<?xml version="1.0" ?>
<rss version="2.0">
   <channel>
      <item>
         <title>Why do I need hardware offloads, I have CPUs to burn!</title>
         <link>https://www.broadcom.com/company/blog/need-hardware-offloads-cpus-bum</link>
         <guid>https://www.broadcom.com/company/blog/need-hardware-offloads-cpus-bum</guid>
         <pubDate>March 6, 2012</pubDate>
         <description>It wasn’t that long ago that enterprise x86 computing was performed on single processor cores running at just a few megahertz (MHz). Getting data in and out of the computer was an expensive consumer of processing resources. If you were serious about I/O, it made perfect sense to consider buying one of those fancy Host Bus Adapters (HBAs) that offloaded the I/O protocol processing to specialized processors made just for that, freeing the computer’s processor to perform other general compute functions. But since then, processor technology has marched forward at a tremendous pace: processing speed has increased from a few MHz up to ~3GHz, which is now the practical limit due to power/thermal efficiency issues. Multithreading, multiple cores and increased processor cache have also been big news in computing, to the point where we can now pack a tremendous amount of compute power into a very small space in the data center. This week, Intel announced availability of its new Xeon E5-2600 processor family, the platform codenamed “Romley.” Its top model will be offered by server manufacturers with 16 physical cores and a whole menu of other great technologies to improve performance and efficiency. So with all this new compute power, you may be thinking: “Why do I need hardware offloads? I have CPUs to burn!” Wikipedia is the first place to look to throw water on that fire. Moore’s Law (1) is famous for predicting the long-term growth of compute power, basically the doubling of processor performance every 18 months. Related to this is Wirth’s law, (2) which states that “software is getting slower more rapidly than hardware becomes faster,” or Gates’s law, “the speed of commercial software generally slows by 50% every 18 months.”</description>
      </item>
      <item>
         <title>Why Do We Need Hadoop?</title>
         <link>https://www.broadcom.com/blog/need-hadoop</link>
         <guid>https://www.broadcom.com/blog/need-hadoop</guid>
         <pubDate>January 18, 2015</pubDate>
         <description>Why do we need Hadoop, or what problem does Hadoop solve in current data centers? The simple answer is that the rapid growth of social media, cellular advances and requirements for data analytics have challenged the traditional methods of data storage and data processing for many large businesses and government entities. To solve the data storage and processing challenges, organizations are starting to deploy large clusters of Apache Hadoop—a solution that applies parallel processing to large data sets, commonly referred to as big data, and creates multiple replicas of the data to avoid any data loss. This is done across inexpensive, industry-standard servers that are used for both storing and processing the data. The Apache Hadoop architecture consists of the Hadoop Common package, which provides file system and operating system (OS)-level abstractions, a MapReduce engine and the Hadoop Distributed File System (HDFS). To store a large file on HDFS, the input file is split into smaller data sets and sent to different nodes (servers) for parallel processing, and those nodes hold the processed data. The framework used for overall processing of the data is called MapReduce. Figure 1: Slicing of big data into smaller blocks as input to the DataNodes* Figure 2: Parallel processing of data on DataNodes* The Emulex OneConnect® family of OCe14000 10Gb Ethernet (10GbE) Network Adapters plays an important role in the Hadoop cluster, moving data efficiently across the nodes. With large amounts of input/output data, it is very important to build a reliable, efficient network. With the Emulex OneConnect family of OCe14000 10GbE Network Adapters, such a network can be deployed in any Hadoop cluster. Below is a sample configuration of a five-node Hadoop cluster, which was implemented and tested in</description>
      </item>
      <item>
         <title>How to Configure Universal Multi-Channel for Emulex OneConnect OCe11102 10 Gigabit Ethernet Adapters</title>
         <link>https://www.broadcom.com/blog/configure-universal-multi-channel-oneconnect</link>
         <guid>https://www.broadcom.com/blog/configure-universal-multi-channel-oneconnect</guid>
         <pubDate>December 19, 2011</pubDate>
         <description>
	The basics of what’s required to get the Universal Multi-Channel (UMC) feature running with Emulex OCe11102 10 Gigabit Ethernet Adapters.

	 

	The Emulex Universal Multi-Channel feature gives administrators the ability to partition an Emulex OneConnect OCe11102 Ethernet adapter into eight logical ports, at varied bandwidths, for storage (iSCSI or Fibre Channel over Ethernet [FCoE]) and Ethernet traffic types. Each logical port has unique MAC, VLAN and bandwidth attributes, and the logical ports are mapped to eight PCIe functions (PF0 – PF7). Basically, operating systems and hypervisors see eight independent physical adapters. UMC does not require SR-IOV, meaning you can deploy UMC on most operating systems and hypervisors in use today. For hypervisors such as VMware, a UMC port can be provisioned for various connection types, such as virtual machine (VM), vSphere vMotion, iSCSI, NFS and host management traffic.
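	On a Linux host, you can confirm that the OS really does enumerate the partitioned adapter as multiple PCIe functions. A minimal sketch, assuming the standard lspci utility from pciutils (the grep pattern and the sample bus address are illustrative, not from this guide):

```shell
# With UMC enabled, a dual-port OCe11102 should show up as eight
# PCIe functions (PF0-PF7) rather than two.
lspci | grep -i emulex

# Inspect a single function in detail; substitute a bus:device.function
# value reported by the first command.
lspci -v -s 04:00.0
```

	Windows provides the equivalent view in Device Manager.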

	To get Universal Multi-Channel working:

	 

	UMC works only on Emulex OCe11102 Ethernet adapters. Download and load the latest drivers and firmware from the Emulex site here. Prerequisites:

	 

	
		Install Emulex OCe11102 10 Gigabit Ethernet (10GbE) adapter in a PCIe x8 slot.
	
		Load latest driver, firmware and OneCommand Manager software.
	
		Confirm software drivers loaded correctly.
	
		Reboot the server to access the Emulex PXESelect Utility.


	 

	On server restart, invoke the Emulex PXESelect™ Utility by pressing the hot key shown at the boot prompt.

	


	 

	Enable Multi-Channel Support, then save the configuration.

	 

	


	 

	From the Port Selection menu, select the controller and port number.

	 

	


	 

	Enable the Administrative Logical Link, configure the bandwidth and assign the logical port VLAN ID (LPVID).

	 

	


	 

	Windows Device Manager shows eight Emulex OneConnect OCe11102-F 10GbE adapters.
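	If you prefer a command prompt over Device Manager, WMI can produce the same count. A small sketch (the WMI query and name filter are assumptions, not from this guide; the display name may vary by driver version):

```shell
:: From an elevated Windows command prompt: list network adapters whose
:: name contains "Emulex"; with UMC enabled, eight entries are expected.
wmic nic where "Name like '%Emulex%'" get Name
```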

	 

	


	For full details on how to deploy the Emulex Universal Multi-Channel feature, check out our complete guide here.
</description>
      </item>
      <item>
         <title>Microsoft Windows 2012/2012 R2 Hyper-V VMs losing network connectivity: a workaround | The Implementer's Blog</title>
         <link>https://www.broadcom.com/blog/microsoft-windows-2012-2012-r2-hyperv-vms-losing-network</link>
         <guid>https://www.broadcom.com/blog/microsoft-windows-2012-2012-r2-hyperv-vms-losing-network</guid>
         <pubDate>June 18, 2014</pubDate>
         <description>UPDATE as of 10/21/14: We have made some VMQ updates and as a result posted the new 10.2.413.1 certified NIC driver on Emulex.com. The links below are for Emulex branded customers only. Please read the release notes carefully for important implementation details. Should you have any questions or need assistance contact Emulex tech support here. Windows 2012 R2 page: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012-r2/drivers/ Windows 2012 page: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012/drivers/ Below are the driver and firmware combinations that should be used for our OEM products supplied by HP and IBM. Please read and follow the specific instructions supplied by the OEM. Should you have any questions or need assistance contact the OEM technical support. HP Customers: NIC driver 10.2.413.1, FW 10.2.431.2 IBM Customers: NIC driver 10.2.413.1, FW 10.2.377.24 UPDATE as of 9/9/14: For HP customers using Emulex 10GbE adapters, HP has made publicly available the latest code that addresses the VM disconnect issue when VMQ is enabled among other enhancements. The download portal is currently located here: http://ow.ly/Bi7Yt. Please read, understand and follow the update documentation provided by HP and contact HP tech support for further information. Thank you for your continued patience. ~~ UPDATE as of 8/4/14: We are pleased to inform you that the July 2014 Special Release for Windows Server 2012 and Windows Server 2012 R2 CNA Ethernet Driver is now available for Emulex branded (non OEM) OCe111xx model adapters. Please refer to this link to download the driver kit and firmware. Please read and follow the special instructions within the Release Notes. For non-Emulex branded adapters, please contact Emulex Tech Support here. ~~ UPDATE AS OF 7/23/14: Emulex is in the process of rolling out updated Microsoft Windows 2012 and 2012 R2 VMQ solutions for our customers. 
Testing of a Windows WHCK certified NIC driver update will be completed in 1-2 weeks.</description>
      </item>
      <item>
         <title>How to Accelerate Workloads and Applications and Boost Performance with HP and Emulex | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/accelerate-workloads</link>
         <guid>https://www.broadcom.com/company/blog/accelerate-workloads</guid>
         <pubDate>January 20, 2015</pubDate>
         <description>IT architecture is currently driven by the convergence of key infrastructure and workload trends across multiple industry verticals, including social business, cloud, mobile and big data analytics. With the majority of workloads now virtualized, server virtualization has become the underlying computing foundation for this new architecture. Given this, many organizations have adapted their data centers by adding servers, network and storage connections for each application workload. While these deployments can add bandwidth to enable faster migration, customers are often left with underutilized resources and compute power, inefficient network sprawl, and more equipment to manage to ensure virtual machine (VM) processing and migration across the infrastructure. The new HP FlexFabric 650/556 adapters powered by Emulex I/O technology address these issues by delivering key capabilities that streamline application performance and VM transfers across the infrastructure. Overlay networking and remote direct memory access (RDMA) over converged Ethernet (RoCE) are two approaches that enable your administrators to reduce the processing overhead on server CPUs in both physical and virtual environments. This enables higher VM density per server, faster storage I/O access and increased server efficiency with lower power consumption, and helps provide secure network scalability. Did you know that together, HP and Emulex are helping accelerate workloads and applications while boosting performance with new capabilities, including the following? High-Performance Virtualization: The HP FlexFabric 650 and 556 adapters provide up to 46 percent better CPU effectiveness than standard NICs, increasing the number of virtual machines (VMs) supported per server, and a 4x increase in small-packet network performance compared to previous-generation adapters1. 
Rapid, Secure and Scalable Hybrid Cloud Connectivity: HP and Emulex deliver Virtual Extensible LAN (VXLAN) tunnel offload and RoCE technology with the HP FlexFabric 650 and 556 adapters to accelerate applications in HP ProLiant and HP BladeSystem</description>
      </item>
      <item>
         <title>Delivering Gen 5 (16Gb) Fibre Channel Solutions for the Modern Enterprise | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/blog/hp-cloud-system-vmware-vcloud-suites</link>
         <guid>https://www.broadcom.com/blog/hp-cloud-system-vmware-vcloud-suites</guid>
         <pubDate>February 8, 2015</pubDate>
         <description>Emerging and evolving critical workloads, higher-density virtualization and cloud-based architectures push the limits of server and storage infrastructure. In addition, customers are considering new technologies, such as flash-based storage and Gen 5 (16Gb) Fibre Channel, that are shifting the focus from storage to the interconnect. These trends create ever higher I/O and bandwidth requirements, driving the need for higher speeds as well as more reliable networks. To address them, HP has announced the new HP Virtual Connect 16Gb 24-port Fibre Channel Module, doubling Virtual Connect Fibre Channel bandwidth from 8Gb to 16Gb and delivering a ‘wire-once’ future-proofed solution for connecting HP BladeSystem c-Class servers to Fibre Channel storage. With industry-leading bandwidth and performance for high-density virtualization, this new module is ideal for customers requiring accelerated I/O for demanding workload applications – data warehousing/big data, e-commerce, healthcare and multimedia are key workloads that require 24x7x365 access and availability to growing data. The Gen 5 (16Gb) Virtual Connect module enables high performance storage networking, with a future-proofed Fibre Channel design that keeps up with these big data needs. HP and Emulex have a rich history in the development of Fibre Channel technology – the HP Gen 5 (16Gb) portfolio provided by Emulex delivers cloud-optimized performance, reduced latency, protocol agility (common drivers and ASICs) and management simplicity. Designed for the HP BladeSystem environment, the HP LPe1605 dual-channel Gen 5 (16Gb) Fibre Channel mezzanine Host Bus Adapter (SKU#718203-B21) delivers maximum performance across the broadest range of applications and environments. The HP Virtual Connect 16Gb 24-port Fibre Channel module and the LPe1605 adapter can be managed with HP Virtual Connect Enterprise Manager (with HP OneView support coming this year). 
To learn more about the HP and Emulex Gen 5 FC solutions and how they can take your infrastructure to the next level,</description>
      </item>
      <item>
         <title>Installing or Updating Emulex Drivers on VMware ESXi 5.0</title>
         <link>https://www.broadcom.com/blog/installing-updating-emulex-drivers-vmware-esxi-5.0</link>
         <guid>https://www.broadcom.com/blog/installing-updating-emulex-drivers-vmware-esxi-5.0</guid>
         <pubDate>November 21, 2011</pubDate>
         <description>Most likely, you are not surprised to hear that VMware ESXi 5.0 users no longer have access to a Service Console. You may have also noticed several new features and changes. One change is the procedure to manually update or install Emulex drivers. Most of the Emulex drivers are inbox drivers and will need to be updated whenever a new version is released. I’d like to share the process for updating your Emulex drivers in this blog post. Other options you may wish to consider are Auto Deploy, or using the vSphere Management Assistant (vMA) appliance. Here are the steps to updating your drivers: Login with your VMware vSphere Client to vCenter Server. Select the host where you want to update or install new drivers. Go into Tech Support Mode to enable SSH. It is a simple task to perform: Highlight the host -&gt; select the Configuration tab -&gt; then select Security Profile from the Software table of contents. Highlight TSM-SSH, then Properties. From your Windows or Linux client, download the Emulex driver for the adapter from VMware’s website and store it in a temporary location, then use a program such as WinSCP to move the driver to the ESXi host. I prefer to place the Emulex driver in the /var/log/vmware directory. Next, SSH into the ESXi 5 host using a tool such as PuTTY. Once logged in, run the following command to install the driver: # esxcli software vib install --no-sig-check --maintenance-mode -d &lt;offline bundle&gt; Example: # esxcli software vib install --no-sig-check --maintenance-mode -d Emulex-FCoE-FC-lpfc829-8.2.3.108.36-offline-bundle.zip Reboot the host to activate the new or updated driver. If for some reason you need to remove the driver, execute the following esxcli command: # esxcli software vib remove -n &lt;vib name&gt; -f We hope this helped you</description>
      </item>
      <item>
         <title>Dude, do you know how to install Emulex ESXi 4.1 drivers?</title>
         <link>https://www.broadcom.com/blog/how-to-install-esxi4.1-drivers</link>
         <guid>https://www.broadcom.com/blog/how-to-install-esxi4.1-drivers</guid>
         <pubDate>October 3, 2011</pubDate>
         <description>
	Have you seen the movie called Dude, Where’s My Car? I laugh every time I see the scene where they go back and forth on their tattoos:

	 

	“We got tattoos!”

	“What does mine say?” “Sweet!” … “What does mine say?” “Dude!”

	“What does mine say?” “Sweet!” … “What does mine say?” “Dude!”

	 

	…And it goes on and on until they get tired of each other and fight about it. Well, it has nothing to do with this blog, but Dude! You do need to know where and how to install Emulex ESXi 4.1 drivers.

	 

	From time to time, I hear from Emulex customers and partners who need to know how to install or update Emulex drivers for the LightPulse Fibre Channel Host Bus Adapters (HBAs) or OneConnect 10Gb Ethernet (10GbE) Universal Converged Network Adapters (UCNAs). In response to these questions, I’ve created a new Application Note that takes you through the process of installing the Emulex Fibre Channel ESXi drivers using the vSphere Command-Line Interface (vCLI).
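	As a rough orientation before you open the app note, a vCLI-based driver install generally follows the shape below. This is a sketch only, assuming the vihostupdate utility shipped with vSphere CLI 4.x; the host name and bundle file are placeholders, and the app note remains the authoritative procedure:

```shell
# Check which bulletins/updates are already on the ESXi 4.1 host.
vihostupdate.pl --server esxi01.example.com --username root --query

# Install the Emulex driver offline bundle (put the host in maintenance
# mode first), then reboot the host to load the new driver.
vihostupdate.pl --server esxi01.example.com --username root \
    --install --bundle offline-bundle.zip
```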

	 

	


	 

	 

	After reading this app note, if you still have questions on installing or updating our drivers with VMware ESX, please post a comment to this blog, or contact Emulex Technical Support, or send an email to me here at implementerslab@emulex.com.
</description>
      </item>
      <item>
         <title>Look to RoCE with OFED to Increase Data Center Throughput and Efficiency</title>
         <link>https://www.broadcom.com/blog/look-roce-ofed-increase-data-center-throughput-efficiency</link>
         <guid>https://www.broadcom.com/blog/look-roce-ofed-increase-data-center-throughput-efficiency</guid>
         <pubDate>March 1, 2015</pubDate>
         <description>As the world of networking continues to advance at faster rates than ever before, more and more demand has been placed on data centers and compute clusters to keep up with the vast amount of data traveling on their networks. In today’s high performance computing environments, traditional network protocols are not capable of providing the transfer speeds required for the smooth operation of a large cluster. Enter the network protocol RDMA over Converged Ethernet (RoCE). RoCE, when used in conjunction with OpenFabrics Enterprise Distribution (OFED) software and the Emulex OneConnect® OCe14000 Ethernet network adapter, combines high throughput with low latency to provide the extreme transfer speeds coveted by these environments. Enabling RoCE requires many separate components and technologies, both software and hardware, so it’s helpful to briefly review some of the key parts. The network technology RoCE uses is Remote Direct Memory Access (RDMA): direct access to the memory of one host from the memory of another, without involving either operating system. This is the main principle behind how RoCE achieves faster speeds than traditional networking. By itself, however, RoCE doesn’t cover all the networking steps needed to complete a successful data transfer, because it only works with the data at the low network levels closest to the hardware (adapter). This is where OFED software is needed. OFED is open source software for RDMA and kernel-bypass applications from the OpenFabrics Alliance. It provides software drivers, core kernel code, middleware and user-level interfaces for multiple operating systems. In other words, RoCE can only get the data so far, and then certain OFED software carries it the rest of the way. Another important aspect of RoCE is that it uses the Ethernet medium. Traditionally, Ethernet does not account for data loss,</description>
      </item>
      <item>
         <title>Best practices for adjusting the device queue depth of a 16GFC HBA in VMware Horizon View 6.0</title>
         <link>https://www.broadcom.com/blog/best-practices-adjusting-device-queue-depth-16gfc-hba-vmware</link>
         <guid>https://www.broadcom.com/blog/best-practices-adjusting-device-queue-depth-16gfc-hba-vmware</guid>
         <pubDate>April 1, 2015</pubDate>
         <description>There are many best practices with VMware solutions that cover networking, storage, deployment, virtual desktop infrastructure (VDI) and so on. Additionally, there are best practices on the same topics from each major OEM, such as HP, Dell, IBM and Lenovo, to name a few. So of course, we had to jump on the bandwagon and create some best practices of our own for Emulex LightPulse® Gen 5 (16Gb) Fibre Channel (FC) Host Bus Adapters (HBAs) in a VMware Horizon View 6.0 environment. The result of this effort is our new Implementer’s Lab Guide, Configure VMware Horizon View 6.0 with Emulex Gen 5 (16Gb) Fibre Channel Host Bus Adapters. Starting with VMware vSphere 5.5, the ability to support end-to-end Gen 5 FC, the increasing demand for monster virtual machines (VMs) and newer virtual hardware versions make it ideal for FC storage area networks (SANs) to step in and address scalability, cloud and VDI concerns. At our Emulex tech marketing labs, we took a stab at understanding the workloads, block sizes and I/O generated by a VDI environment with VMware Horizon View 6.0. We configured a single Xeon-based host with a Gen 5 FC HBA connected to an all-flash array. The two FC ports were configured in a zone. The VMs all resided in a 4TB Logical Unit Number (LUN) from the all-flash array. We took a snapshot of a Windows 7 VM golden image and provisioned 200 VMs with VMware View Composer. The VMs were all running a single vCPU, 4GB HDD and 2GB RAM on Windows 7. To create a load and simulate a VDI environment, we used what I would call a static load with LoginVSI—an exceptional tool to measure and test ESXi in a VDI environment by running simulated workloads of different sizes. And of</description>
      </item>
      <item>
         <title>Federal Credit Union Banks On HP ProLiant Servers and Emulex I/O Connectivity Solutions | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/credit-union-banks-on-connectivity-solutions</link>
         <guid>https://www.broadcom.com/company/blog/credit-union-banks-on-connectivity-solutions</guid>
         <pubDate>November 21, 2014</pubDate>
         <description>
	A federal credit union in New York, with more than $3.8 billion in assets, serves the staff members, specialized departments, retirees and families of a well-known international agency. Executives need analytic reports to be available first thing in the morning, but as the institution’s SAN reached performance limits, delays interrupted timely delivery. Plus, the credit union’s IT team had to spend up to seven hours per week managing cluster performance issues. The solution to these workload challenges was an infrastructure deploying HP StoreFabric 16Gb Fibre Channel (FC) Host Bus Adapters (HBAs), HP Converged Network Adapters (CNAs) and HP ProLiant DL380 Gen8 Servers.

	 

	At the heart of the credit union’s state-of-the-art data center are virtualized Microsoft SQL Server clusters, which support I/O-intensive reporting and analytic applications. Different applications drive I/O to the credit union SAN during different parts of the day. During business hours, most SAN I/O comes from the credit union’s specialized core banking application suite, which processes ATM transactions, as well as transactions inside the credit union.

	 

	The servers are now equipped with redundant connectivity to high bandwidth 10Gb Ethernet (10GbE) LANs and 16GFC SANs. By deploying HP ProLiant Gen8 servers and high performance connectivity to LANs and SANs, the federal credit union reduced report generation time from eight hours to two hours per night. Once again, reports are ready in the morning, and there is even time to rerun reports if needed. Efficiency soared as the IT team reclaimed seven hours per week of performance management that was no longer needed. Business productivity increased as additional SAN bandwidth allowed other applications to access the SAN while reports were being generated.

	 

	To read the entire case study, please visit – http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA5-5798ENW&amp;cc=us&amp;lc=en

	 

	For more information on the HP and Emulex portfolio, please visit – www.emulex.com/hp
</description>
      </item>
      <item>
         <title>Error Message Resolution for Emulex OneCommand Manager VMware vCenter Plug-in v1.1</title>
         <link>https://www.broadcom.com/blog/error-message-resolution-onecommand-and-manager-vmware-vcenter</link>
         <guid>https://www.broadcom.com/blog/error-message-resolution-onecommand-and-manager-vmware-vcenter</guid>
         <pubDate>October 16, 2011</pubDate>
         <description>
	Emulex recently released OneCommand Manager for VMware vCenter Server 1.1 to support the release of VMware ESXi 5.0 (download it here and try it out!). Our technical marketing team has created an application note to help those who run into a privileges error when trying to register the plug-in.

	 

	Here’s what we found: After installing Emulex OneCommand Manager for VMware vCenter Server 1.1, you need to register the plug-in. Unfortunately, as you try to do so, this pop-up window appears as shown here:

	 

	


	 

	It’s easy to resolve, and only takes minutes. Click here to view the complete application note, Error Message Resolution for Emulex OneCommand Manager VMware vCenter Plug-in v1.1.
</description>
      </item>
      <item>
         <title>Blade Server I/O and Workloads of the Future: Advantage HP | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/blade-server-io-workloads-future-advantage</link>
         <guid>https://www.broadcom.com/company/blog/blade-server-io-workloads-future-advantage</guid>
         <pubDate>December 1, 2014</pubDate>
         <description>A new IT Brand Pulse report compares the latest updates to the Cisco UCS blade system with the latest-generation HP ProLiant BladeSystem Gen9 servers, based on the Intel Xeon E5-2600 processor family. Performance, consolidation and flexibility were identified as the key differentiators for blade server deployments in Web-scale environments. Hyperscale-driven applications and data center architectures require a new level of blade server infrastructure and networking I/O to meet the growing demands of today and tomorrow. Comparatively, the UCS enhancements are components designed for the UCS Mini, with no significant changes to the high-end systems using the 5108 chassis. With only 1.2Tb per second (Tbps) of mid-plane bandwidth, the 5108 is limited in its ability to support more than 8 servers and single links greater than 10Gbps. The new HP BladeSystem c7000 Platinum chassis, however, offers 7Tbps of mid-plane bandwidth, with new support for 20Gb Ethernet (20GbE) downlinks, as well as 40GbE uplinks. The HP ProLiant Gen9 BladeSystem also takes converged networks to the next level with hardware offload of important new networking protocols supporting tunneling of L2 traffic over L3 networks, and scale-out file storage traffic. This tunnel offloading technology allows customers to accelerate business workloads/applications, boosting business performance and increasing virtual machine (VM) density per server, while lowering CPU and power utilization. Overall, the HP ProLiant BladeSystem Gen9 platform brings a new level of convergence, which will allow resources to be allocated at a very granular level, improving efficiencies and ensuring optimal performance as workload demands change. The I/O connectivity solutions in the HP ProLiant BladeSystem Gen9 platform are provided by Emulex. 
Based on the Emulex fourth-generation OneConnect® Converged Network Adapter (CNA) technology, the 20GbE 2-port HP 650FLB and 650M adapters are the industry’s first CNAs to support Local Area Network (LAN), Storage Area Network (SAN) and RDMA over</description>
      </item>
      <item>
         <title>Emulex VMware vSphere® 5.1 Web Client plug-in and the missing step</title>
         <link>https://www.broadcom.com/company/blog/emulex-vmware-vsphere-51-web-client-plug-missing-step</link>
         <guid>https://www.broadcom.com/company/blog/emulex-vmware-vsphere-51-web-client-plug-missing-step</guid>
         <pubDate>January 17, 2013</pubDate>
         <description>Emulex recently announced support for the new VMware vSphere® 5.1 Web Client with the Emulex OneCommand Manager plug-in for VMware vCenter™ version 1.4.10. So of course, I downloaded the plug-in and replaced my older version. I found the original OneCommand Manager plug-in for the VMware vCenter desktop client works and installs the same way, but the Web Client is a bit different: I needed an extra step to get this puppy working with my Web Client. My intent in this blog is to inform you of a step that’s different in the configuration process for the plug-in. After trying a few times to get it to appear correctly, I gave in and searched VMware’s documentation. That’s right – I read the manual. In my case, I came across the VMware vSphere 5.1 API/SDK documentation (by default, the plug-in is disabled and does not show up in the Web Client). When you install the OneCommand Manager plug-in for VMware vCenter version 1.4.10, it includes the plug-in for the Web Client. If you are able to get the plug-in to work through the VMware vCenter desktop client, you should be able to install it for the Web Client. Of course, you must have VMware single sign-on working, the VMware vSphere 5.1 Web Client installed and working, your credentials all taken care of and the correct CIM providers installed to get the plug-in registered and running. So here’s what we had to do to get the Web Client plug-in to appear under “Classic Solutions” for both cluster and host. First, the file called webclient.properties in the VMware vSphere Web Client install directory needs to be unhidden. To do that, we need to unhide the %ProgramData% directory. Open Windows Explorer, select the C: drive, press the Alt key to bring</description>
      </item>
      <item>
         <title>Having trouble diagnosing those hard to solve I/O performance issues? Let your initiators do the heavy lifting.</title>
         <link>https://www.broadcom.com/company/blog/diagnosing-hard-io-performance-issues</link>
         <guid>https://www.broadcom.com/company/blog/diagnosing-hard-io-performance-issues</guid>
         <pubDate>August 23, 2011</pubDate>
         <description>This is the first installment in a series of blogs that will discuss SAN performance monitoring and troubleshooting. It was a typical crisis in the data center. Another application slowdown has the team working late nights, and working with vendors to determine whose equipment is causing problems. Application performance monitoring says the servers have CPU and memory to spare. Storage Resource Monitoring (SRM) tools are telling you there are loads of capacity and bandwidth, but the application is still unresponsive and the trouble tickets are pouring in. Sound familiar? In spite of the numerous tools available to administrators today, understanding and overcoming application I/O performance problems often requires a deeper understanding of the protocol conversations occurring between devices in the storage network. Traditional tools leave administrators with I/O ‘blind spots’. Diagnosing problems in these blind spots forces administrators to start searching for clues in hard-to-reach places. This search often involves adjusting driver settings, attaching ‘taps’ to capture traffic, contacting equipment vendors, and re-educating your team on the finer details of storage protocols. There are the obvious ‘capacity’-related slowdowns, which occur when interconnect or storage equipment is overloaded. In these cases, the adapters, switches, or storage arrays are not able to handle the load. These ‘physical’ limits can significantly affect I/O performance but are often easily resolved by procuring more capacity or redistributing the I/O load. To avoid them, most organizations deploy tools that monitor and report when certain physical thresholds have been exceeded, before they become problems. Lesser known are the many ‘soft’ performance issues caused by misbehaving or misconfigured infrastructure attached to the SAN. 
These ‘harder to detect’ problems can silently impact the performance of other devices (servers) sharing the same infrastructure. Too often, the first signs of trouble are alerts sent from application</description>
      </item>
      <item>
         <title>What’s an “Error 1327: Invalid Drive E”?</title>
         <link>https://www.broadcom.com/blog/what-s-an-error-1327-invalid-drive-e</link>
         <guid>https://www.broadcom.com/blog/what-s-an-error-1327-invalid-drive-e</guid>
         <pubDate>October 24, 2012</pubDate>
         <description>
	Last week, I was trying to uninstall OneCommand Manager and 
VMware Update Manager from the same Windows 2008 server. I kept getting a pop-up window with the message “Error 1327: Invalid Drive E”. Like almost everyone when something unknown pops up, I turned to the Internet. I saw several postings regarding “Invalid Drive E:” and a few other drive letters. All seemed to relate to either a system folder mapped to a network drive, a changed CD-ROM drive letter or a possibly corrupt registry key. I took a look at my registry key settings and all pointed to the correct path. I then picked one of the links from my search and used Adobe’s help forum. I basically followed Solution 1 and it seems to have corrected the problem. Here is the link I used: http://helpx.adobe.com/creative-suite/kb/error-1327-invalid-drive-drive.html
	Basically, go to a command prompt and use the DOS command called “subst” to map the missing drive letter to a valid path.
		Select Start » Run
	
		Type cmd and press Enter
	
		Type the command “subst E: C:\” and press Enter
	
		Type exit to close the command window
	
		Attempt to uninstall the OneCommand Manager
	In my case, both OneCommand Manager and VMware Update Manager successfully uninstalled from the server.
</description>
      </item>
      <item>
         <title>Sometimes you need to see the protocol conversation to understand and solve SAN performance issues</title>
         <link>https://www.broadcom.com/company/blog/need-protocol-conversation-solve-san-performance-issues</link>
         <guid>https://www.broadcom.com/company/blog/need-protocol-conversation-solve-san-performance-issues</guid>
         <pubDate>October 4, 2011</pubDate>
         <description>This is the second installment in a series of blogs that will discuss SAN performance monitoring and troubleshooting. Consider this situation faced by one of our ‘large financial’ customers with a complex SAN environment running critical trading applications. This customer had been experiencing periodic performance problems with a Windows cluster running a critical business application. In a nutshell, application I/O was taking too long, internal timers would expire and the application would shut down. After each occurrence, administrators would quickly stabilize the application, balancing the need to collect information about the issue with the requirement to minimize the amount of money the company was losing. Many trouble tickets were opened and the symptoms were well understood, but a root cause could not be found using the tools available to the team. Eventually, vendors were asked to help prove that their equipment was not at fault. One by one, each vendor used their own management applications to demonstrate that no faults existed in their equipment. Finally, under the guidance of Emulex Technical Support, the customer enabled extended logging on the servers experiencing the slowdown. This extended logging allowed the typically hidden Extended Link Service (ELS) and SCSI protocol events captured by the Emulex adapters to be collected in a log file. These low-level events are processed by the adapters and driver software but are not reported and are certainly not available through any native OS API or event interface. After a brief review of the protocol events collected, the teams identified an unexpected pattern: the storage target was repeatedly sending Port Logout (LOGO) commands to each server in the Windows cluster. This LOGO command would cause all outstanding I/O operations to be cancelled, requiring each server to log in (again) and re-send outstanding I/O. Although some number of I/O would complete,</description>
      </item>
      <item>
         <title>Blog Series Part 2: Can the global advance disk parameter Disk.DiskIOMaxSize make a difference with software or hardware FCoE adapters running large block I/O in VMware vSphere® 5.1?</title>
         <link>https://www.broadcom.com/blog/part-2-can-the-global-advance-disk-parameter</link>
         <guid>https://www.broadcom.com/blog/part-2-can-the-global-advance-disk-parameter</guid>
         <pubDate>May 28, 2013</pubDate>
         <description>This blog is the second in a two-part series that examines Fibre Channel over Ethernet (FCoE) implementations with VMware vSphere 5.1 using VMware’s software FCoE and hardware FCoE adapter. These blogs are intended to share our findings regarding the relative performance of software and hardware FCoE adapters when working with large-block, sequential I/O – in particular, the impact of the Disk.DiskMaxIOSize setting on storage performance. Keep in mind that your results will be different, as not all environments are the same. Testing should be done to experience the behavior in your own lab environment. In previous lab tests with software FCoE and a few virtual machines (VMs), we encountered an unexpected drop in throughput (MB/s) starting at around 64K block I/O. Once we made a change to Disk.DiskMaxIOSize, we were able to improve throughput; however, we continued to see poor latency response times. As the second part of our experiment, we installed and tested a supported converged network adapter (CNA) featuring hardware FCoE (offload) using a single port. We left the default setting of 32767 in the advanced parameter settings. After running the tests, we looked at I/O operations per second (IOPS), throughput, CPU utilization and latency. We first looked at the IOPS and throughput measurements. The chart below shows a similar sloping curve, in which IOPS are high with smaller block sizes, along with high CPU utilization on the VM. Both software FCoE and hardware FCoE had similar slopes, but hardware FCoE produced more I/O operations with smaller block sizes. Both hardware and software FCoE offered similar IOPS performance for larger block sizes. Figure 1. Hardware FCoE adapter I/Os with default Disk.DiskMaxIOSize setting. Next, we wanted to know if there was a difference in behavior for hardware FCoE versus software FCoE in terms of throughput, especially since this is where</description>
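For reference, Disk.DiskMaxIOSize is exposed as an advanced host setting and can be checked or changed from the ESXi shell; a sketch of the two commands involved (the 4096 KB value is an illustrative example, not a recommendation):

```shell
# Show the current value of Disk.DiskMaxIOSize (reported in KB; default 32767)
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

# Cap the largest single I/O the host will issue at 4096 KB (4 MB)
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096
```

The same option is also reachable from the vSphere Client under Advanced System Settings.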
      </item>
      <item>
         <title>New HP CloudSystem Solutions for VMware vCloud Suite – Enabled by HP I/O Provided by Emulex | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/blog/hp-cloud-system-vmware-vcloud-suite</link>
         <guid>https://www.broadcom.com/blog/hp-cloud-system-vmware-vcloud-suite</guid>
         <pubDate>September 7, 2012</pubDate>
         <description>Last week at VMworld in San Francisco, HP announced the expansion of their HP Converged Cloud Portfolio with VMware vCloud Suite 5.1, enabling clients to transform traditional virtualization deployments into open, private and hybrid cloud environments with less risk and complexity. Based on the HP strategy of Converged Infrastructure, the solution comprises infrastructure and services that together deliver optimal efficiencies for today’s demanding data centers. The cutting-edge components in this solution consist of HP ProLiant and BladeSystem Generation 8 (Gen8) servers, converged LeftHand and 3PAR storage, and HP FlexFabric networking for optimizing and streamlining I/O performance, along with tight VMware and management integration. These updated HP VirtualSystem solutions create the foundation for new, turnkey, prepackaged HP CloudSystem solutions optimized for VMware Cloud Infrastructure Suites. The I/O connectivity in each of these solutions is a critical component to enabling the production and movement of virtual machines (VMs). The HP FlexFabric 10Gb Ethernet (10GbE) adapters provided by Emulex are the chosen adapters to drive optimal performance and streamline VM traffic in these solutions, averaging 20 percent more VMs per server when combined with the latest release of VMware vSphere.* Having been the I/O solution of choice in these VirtualSystem solutions since inception, the HP-branded 10GbE/Fibre Channel over Ethernet (FCoE) adapters provided by Emulex are essentially converged-ready Network Interface Cards (NICs) that provide all-in-one capabilities for FCoE, iSCSI, and TCP/IP deployments – and they’re offered at the price of a straight 10GbE NIC! This makes transitioning from HP G7 to Gen8 simple, as the HP 10GbE technology provided by Emulex was leveraged in the G7 blade platform – so the same drivers, firmware and management tools are used, providing a seamless upgrade path. 
The new HP CloudSystem solutions allow customers to accelerate their journey to the cloud by offering a</description>
      </item>
      <item>
         <title>Blog Series Part 1: Can disk parameter Disk.DiskIOMaxSize make a difference with large I/Os in VMware vSphere® 5.1?</title>
         <link>https://www.broadcom.com/blog/blog-series-part-1-disk-parameter</link>
         <guid>https://www.broadcom.com/blog/blog-series-part-1-disk-parameter</guid>
         <pubDate>April 23, 2013</pubDate>
         <description>This blog is the first in a two-part series that examines Fibre Channel over Ethernet (FCoE) implementations with VMware vSphere 5.1 using VMware’s software FCoE and a hardware FCoE adapter. These blogs are intended to share our findings regarding the relative performance of software and hardware FCoE adapters when working with large-block, sequential I/O – in particular, the impact of the Disk.DiskMaxIOSize setting on storage performance. In recent lab tests with software FCoE and a few virtual machines (VMs), we encountered an unexpected drop in throughput (MB/s) with large block I/O. We were using sequential I/O through a single physical 10Gb Ethernet (10GbE) port. The VMs were running Microsoft Windows 2008 R2; each was configured with four virtual CPUs (vCPUs) and 8GB of memory. Two raw device mapping (RDM) disks were mapped to each host. We enabled the software FCoE driver that comes with the hypervisor and made the appropriate LUN mappings. The IOmeter software tool was used to test a range of block sizes (512B – 1MB) across all RDM drives, with two workers per VM – one set to test 50% reads and the other to test 50% writes for full duplex mode. The targets used in this case were four Linux-based storage memory emulators with four targets each, for a total of 16 targets. Figure 1 shows the results for these sequential I/O tests when we used the default setting for Disk.DiskMaxIOSize. This figure represents the baseline performance for software FCoE. Figure 1. I/Os with default Disk.DiskMaxIOSize setting using software FCoE. With larger block sizes, the array was unable to perform any I/Os. Figure 2 shows throughput during the same test of software FCoE and, in particular, the drop-off that occurred with larger block sizes. At this point, we theorized that the array became stressed with blocks</description>
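The IOPS and throughput curves in these figures are linked by simple arithmetic: throughput equals IOPS multiplied by block size, which is why a chart can show falling IOPS while MB/s holds steady. A minimal sketch of that relationship (the IOPS figures below are illustrative only, not our measured results):

```python
# Throughput (MB/s) = IOPS x block size. Reading either metric alone can
# mislead: small blocks give high IOPS at low MB/s, large blocks the reverse.

def throughput_mbs(iops: float, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_size_bytes / 1_000_000

# Illustrative sweep across the block-size range IOmeter tested (512B - 1MB),
# assuming a hypothetical link that sustains about 1 GB/s at every size.
for block in (512, 4096, 65536, 1048576):
    iops = 1_000_000_000 / block
    print(f"{block:>8} B: {iops:>10.0f} IOPS = {throughput_mbs(iops, block):6.1f} MB/s")
```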
      </item>
      <item>
         <title>FreeBSD Networking with Emulex OneConnect® Ethernet Adapters</title>
         <link>https://www.broadcom.com/blog/freebsd-networking-emulex-oneconnect-ethernet-adapters</link>
         <guid>https://www.broadcom.com/blog/freebsd-networking-emulex-oneconnect-ethernet-adapters</guid>
         <pubDate>August 19, 2013</pubDate>
         <description>A few months back, the question of “tuning” the Emulex FreeBSD driver came up and it took me back to the days when I would spend time “tuning” largely unroadworthy 1960s and 1970s British cars on weekends and evenings (I live in the UK so it seemed like a good idea at the time!). I believed this was “performance tuning” but in reality, if the thing started without a push, it was a bonus. But it always felt like the hours spent tweaking timings, gapping spark plugs and balancing Skinner Union (SU) carburettors with a variety of tubes and tuning gadgets were worth all the time and blood lost. Network card driver tuning – what could be more fun? If we look at the traditional customer base for Emulex products, it has been the sort of enterprise-level data centres that use traditional operating systems (OSes) from the likes of Red Hat, SuSE, Microsoft, VMware and OEM UNIX derivatives. These “paid for” OSes (money up front and continuing support) have formed the backbone of our IT world and have been the focus of our driver development for Fibre Channel and Ethernet products. But the IT world is changing and we are seeing new, dynamic types of customer who are willing and able to take open source software to build new data centres for the world of big data and cloud solutions. One way Emulex has responded to this is to increase OS support outside of the “usual suspects” to embrace not only the community version of Red Hat (CentOS) but also Debian, Ubuntu and FreeBSD. FreeBSD is an interesting OS that is often seen as a less showy alternative to the myriad of Linux distributions. Just getting its head down and getting on with the job, FreeBSD is quietly running</description>
      </item>
      <item>
         <title>How to Build an OpenStack Cloud Computing Environment in High Performance 10GbE and 40GbE Networks</title>
         <link>https://www.broadcom.com/company/blog/build-openstack-cloud-computing-environment-high-performance-10gbe-40gbe-networks</link>
         <guid>https://www.broadcom.com/company/blog/build-openstack-cloud-computing-environment-high-performance-10gbe-40gbe-networks</guid>
         <pubDate>March 8, 2015</pubDate>
         <description>OpenStack is a free set of software and tools for building and managing cloud computing environments for public and private clouds. It is considered a cloud operating system that has the ability to control large pools of compute, network and storage resources throughout a data center, and provides the following capabilities: networks; virtual machines (VMs) on demand; storage for VMs and arbitrary files; and multi-tenancy. If you’re a regular follower of our Implementer’s Lab Blog, however, chances are you’re technically savvy and already understand the benefits that OpenStack brings to the table for building private clouds and Infrastructure as a Service (IaaS) offerings. Our guess is that many of you have found yourself at the next stage of analyzing how to build an OpenStack cloud computing environment in a high performance 10Gb Ethernet (10GbE) or 40GbE network. In anticipation of this, our engineers set out to configure OpenStack (Icehouse release) on Red Hat Enterprise Linux 6.5 with Emulex OneConnect® OCe14100 10GbE adapters using Emulex Network Interface Card (NIC) partitioning technology. The Emulex OneConnect OCe14000 family of 10GbE and 40GbE network adapters is optimized for virtualized data centers that have increased demand for accommodating multiple tenants in cloud computing applications. And with Emulex Universal Multi-Channel™ (UMC) and Emulex OneCommand™ Manager technology as the underlying networking essentials and tools, Emulex provides an ideal solution for building cloud computing environments. OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters After months of tests and validation, we created a solution design guide, “OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters”, to walk you through the steps to configure Emulex OneConnect OCe14000 adapters in a basic three-node OpenStack cloud configuration. 
It provides an easy-to-follow blueprint leveraging unique Emulex I/O connectivity capabilities for allocating bandwidth, converging multiple protocols, and safely isolating OpenStack core networks or applications. The rest of</description>
      </item>
      <item>
         <title>New IT Brand Pulse Report – Vonage’s Journey to a Virtualized Environment with HP BladeSystem | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/blog/vonages-journey-hp-blades</link>
         <guid>https://www.broadcom.com/blog/vonages-journey-hp-blades</guid>
         <pubDate>August 28, 2012</pubDate>
         <description>IT Brand Pulse recently published a new application report showcasing Vonage and their journey from a discrete “rack-based” environment to a more virtualized environment based on HP BladeSystem G7 servers, HP Virtual Connect Flex-10, HP FlexFabric 10Gb Ethernet (10GbE) adapters provided by Emulex, and VMware ESXi 5.0. The Vonage migration from discrete data center to virtualized data center involved many new technologies, products, and processes that were deployed in their Holmdel, New Jersey corporate headquarters. Blade servers, server virtualization and network virtualization are core technologies key to slashing costs, while maintaining application performance and availability. The rack-to-blade migration resulted in the servers performing the same functions, but with much smaller floor space, power, cooling and cabling requirements. More consolidation and savings were found with the deployment of server virtualization. Server virtualization allows Vonage to fully utilize the compute power of each physical blade server by running multiple virtual machines (VMs) and applications. To accommodate the proliferation of VMs, HP BladeSystem features virtual networking capabilities which allow server admins to configure unique virtual networks for each VM. Embedded on HP BladeSystem G7 servers are dual-port integrated HP Virtual Connect 10Gb FlexFabric Adapters, which are based on Emulex technology. The combination of blade servers, server virtualization and network virtualization allows hundreds of VMs to be deployed in a single cabinet! So how has Vonage been able to quantify these results? Overall, the company will consolidate 1,100 rack-mount servers, 40 cabinets and 3,000 cables into only two cabinets, four blade server chassis, and a handful of cables. Along the way, Vonage architects have identified a best practice for configuring I/O for live migrations, and a killer application for 10GbE. 
Download the paper here to read more insights and lessons learned by Vonage. This is a great example of</description>
      </item>
      <item>
         <title>Virtual Network Fabric Performance Improvements Using Emulex VNeX Technology</title>
         <link>https://www.broadcom.com/blog/vxlan-performance-improvements-emulex-vnext-technology</link>
         <guid>https://www.broadcom.com/blog/vxlan-performance-improvements-emulex-vnext-technology</guid>
         <pubDate>January 21, 2014</pubDate>
         <description>The Emulex OneConnect OCe14000 family of 10Gb and 40Gb Ethernet (10GbE and 40GbE) Network Adapters and Converged Network Adapters (CNAs) are the first of their kind to be designed and optimized for Virtual Network Fabrics (VNFs). Key to this claim is Emulex Virtual Network Exceleration™ (VNeX) technology which, among other things, restores the hardware offloads that are normally lost because of the encapsulation that takes place with VNFs. For a VMware environment that is utilizing a Virtual Extensible LAN (VXLAN) VirtualWire interface, most Network Interface Cards (NICs) will see a significant reduction in throughput performance due to losing the NIC hardware offloads, and a loss of hypervisor CPU efficiency, due to it now having to perform much of the computation that the NIC otherwise would have done. The OneConnect OCe14000 adapters by default use VNeX to restore the offload processing in the hardware, thus providing non-virtual network levels of throughput and hypervisor CPU efficiency in VNF environments. To prove this point, we set up a VXLAN working model using two VMware ESXi 5.5 host hypervisors and configured a VXLAN network connection between them. Each server hosted eight RHEL 6.3 guest virtual machines (VMs) with network access between the hypervisors using the VMware VirtualWire interface. As a network load generator, we used IXIA IxChariot to perform network performance tests between the VMs. We compared two test cases, one with the hardware offloads enabled on the OCe14000 (this is the default behavior) and another with a NIC that does not utilize hardware offloads for VXLAN. You can see in chart 1 that the bi-directional throughput with hardware offloads is as much as 70 percent greater when compared to a NIC without the hardware offloads. In chart 2, you can see the impact that hardware offloads have on hypervisor CPU utilization, the OCe14000 adapter with</description>
      </item>
      <item>
         <title>Emulex OCe14000 family of Ethernet and Converged Network Adapters bring new levels of performance and efficiency</title>
         <link>https://www.broadcom.com/blog/emulex-oce14000-family-ethernet-converged-network-adapters-bring</link>
         <guid>https://www.broadcom.com/blog/emulex-oce14000-family-ethernet-converged-network-adapters-bring</guid>
         <pubDate>July 7, 2014</pubDate>
         <description>When we launched the OneConnect® OCe14000, our latest Ethernet and Converged Network Adapters (CNAs), we touched on a number of performance and data center efficiency claims that are significant enough to expand on. The design goals of the OCe14000 family of adapters were to take the next step beyond what we have already delivered with our three previous generations of adapters, by meeting the performance, efficiency and scalability needs of the data center, Web-scale computing and evolving cloud networks. We believe we delivered on those goals and can claim some very innovative performance benchmarks. A 4x Improvement in Packet Performance vs. Previous Generation Adapters Fundamental to delivering high Network Interface Card (NIC) performance under all conditions is the ability to handle a high rate of incoming/outgoing Ethernet packets. This is often referred to as frame rate or packet rate and expressed in terms of how many can be transferred per second, so the terms FPS or PPS (frames per second or packets per second) are often used. There are a few reasons why this is important. First is the fact that if the number of frames coming into the receiver is higher than can be processed by the NIC, the remainder are simply dropped. There are a number of application allowances or upper-level protocol methods to work around dropped frames, all of which are less ideal than simply not dropping the frames in the first place. We designed the OCe14000 family to perform at 4x the frame rate of the previous OCe11100 family. That is not to say the OCe11100 family was bad; in fact, it had a higher frame rate than any other adapter with hardware storage offloads in the current market. There is an industry standard testing procedure which defines fair practices for testing Ethernet devices</description>
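The theoretical ceiling for frame rate is easy to derive: on the wire, every Ethernet frame carries 20 bytes of overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte minimum inter-frame gap) on top of its own length, so a 10GbE link tops out at about 14.88 million minimum-size frames per second. A quick sanity check:

```python
# Maximum frames per second an Ethernet link can carry at a given frame size.
# Each frame occupies frame_size + 20 bytes on the wire (preamble,
# start-of-frame delimiter and minimum inter-frame gap).

PREAMBLE_SFD_IFG = 20  # bytes of per-frame overhead on the wire

def max_fps(link_bps: float, frame_size: int) -> int:
    """Line-rate frame count per second for a given link speed and frame size."""
    return int(link_bps / ((frame_size + PREAMBLE_SFD_IFG) * 8))

print(max_fps(10e9, 64))    # minimum-size frames: ~14.88 Mpps
print(max_fps(10e9, 1518))  # full-size frames: ~813 Kpps
```

Sustaining anywhere near the 64-byte figure is what separates adapters, since it is the per-frame processing cost, not raw bandwidth, that becomes the bottleneck.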
      </item>
      <item>
         <title>What is a native mode driver in VMware vSphere ESXi 5.5?</title>
         <link>https://www.broadcom.com/company/blog/native-mode-driver-vmware-vsphere-esxi-55</link>
         <guid>https://www.broadcom.com/company/blog/native-mode-driver-vmware-vsphere-esxi-55</guid>
         <pubDate>July 28, 2014</pubDate>
         <description>VMware recently introduced a new driver model called native mode in vSphere 5.5. VMware ESXi 5.5 has two driver models: “vmklinux,” which is the legacy driver model, and the new “native” mode driver model. Moving forward, Emulex supports the native mode driver model for ESXi 5.5. Emulex Fibre Channel (FC) adapters support the inbox native mode “lpfc” driver for the FC/Fibre Channel over Ethernet (FCoE) storage protocols. The Emulex Ethernet (or Network Interface Card (NIC)) functionality has an inbox native mode driver called “elxnet.” As of this writing, the only remaining legacy (vmklinux-based) driver is the “be2iscsi” driver for iSCSI support. What does this mean? Some of those changes can potentially have an impact on your migration or upgrade plan for ESXi 5.5, such as driver parameter settings being used with ESXi 5.1 or older versions, and ethtool, a popular network tool used by network administrators. When planning to update or migrate over to ESXi 5.5 from an older ESXi version, you can configure Emulex adapters with certain driver parameters. Driver parameters such as the lpfc queue depth can be set to a lower or higher value. These driver parameters will need to be backed up manually before the upgrade or migration and re-entered manually after the upgrade or migration is done. Why is that? In ESXi 4.x and 5.1, the inbox Fibre Channel driver was called “lpfc820”; in ESXi 5.5, esxcli will look for that legacy lpfc820 driver and will not find it, because it has been replaced by the inbox native mode driver called “lpfc.” To simplify the process for several ESXi 5.5 hosts, creating a profile or using VMware Update Manager with the correct driver parameter settings already set speeds</description>
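As a sketch of that back-up-and-restore step from the ESXi shell (the queue depth value shown is an illustrative example, not a recommendation):

```shell
# List the lpfc module's current parameter settings so they can be
# recorded before the upgrade or migration
esxcli system module parameters list -m lpfc

# Re-apply a saved value after the upgrade, e.g. the LUN queue depth
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=64"
```

Module parameter changes take effect after the host is rebooted.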
      </item>
      <item>
         <title>Extend Data Center Efficiency and Time-to-service to HP ProLiant Servers | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/extend-data-center-hp-proliant-servers</link>
         <guid>https://www.broadcom.com/company/blog/extend-data-center-hp-proliant-servers</guid>
         <pubDate>March 31, 2015</pubDate>
         <description>Data centers continue to build out and scale their infrastructure by adding equipment to address new or growing application workloads. Fortunately, with server virtualization, data centers are able to reduce capital costs by dynamically reassigning server workloads on demand and enabling higher virtual machine (VM) densities. However, increasing VM density and moving VMs to or from a server residing on a different network (including the cloud) can reduce IT agility, operations efficiency and time-to-service availability. At the server, CPU utilization is affected by the amount of time a server spends figuring out where workloads go across a larger virtual network. This also affects storage operations, as the magnitude of data affects application performance and storage I/O performance, delivering a less-than-fluid experience. The challenge becomes ensuring enough bandwidth is available for network virtualization, while reducing CPU involvement in tasks like migration. Storage must meet these varying demands for both physical and virtual resources. Furthermore, scaling traditional workloads across large layer 2 networks and into the virtual fabrics creates additional network addressing complexity and management challenges to ensure performance remains unaffected. The new HP Ethernet 10Gb 2-port 557SFP+ PCI Express (PCIe) 3.0 Network Interface Card (NIC) for the HP ProLiant Gen9 rack and tower servers delivers the functionality and performance to address these challenges. By utilizing Emulex’s latest 10GbE technology in the latest-generation PCIe 3.0 format, HP is delivering leading edge performance, value, and simplicity with advanced features and functionality for enterprise virtualization and hybrid cloud deployments. 
HP Ethernet 10Gb 2-port 557SFP+ adapter for HP ProLiant The HP 557SFP+ NIC expands the broad HP/Emulex portfolio of I/O connectivity solutions for HP ProLiant Gen9 servers, delivering comprehensive coverage across form factors, as well as networking and storage protocols. The HP 557SFP+ NIC is the only adapter on the market with</description>
      </item>
      <item>
         <title>How to optimize your SAN resource utilization and save $200K per year!</title>
         <link>https://www.broadcom.com/blog/how-to-optimize-san-resource-utilization</link>
         <guid>https://www.broadcom.com/blog/how-to-optimize-san-resource-utilization</guid>
         <pubDate>October 31, 2011</pubDate>
         <description>Administrators are seeking creative ways to get more from their existing infrastructure. A recent survey of CIOs reveals what many already know: IT budgets are stagnant or shrinking. At a time of explosive growth and increased demand for performance, organizations are being pushed to innovate to survive. Given the limited ability to grow, administrators look to optimize existing resources, squeezing out performance to help them meet demand. One strategy involves auditing to find unused or underutilized SAN-attached storage, which is something Emulex OneCommand Vision does (we announced version 2.0 today at SNW Europe; check out our announcement here). Inactivity on a LUN, for example, is an indication that an application’s demand for storage may be changing. Each SAN-attached LUN represents a ‘chunk’ of infrastructure dedicated to a particular compute resource, such as a server. As the demand for SAN-attached resources rises, the opportunity cost of letting underutilized resources remain in place rises. Auditing for underutilized resources at the current storage tier allows administrators to reprovision costly infrastructure, moving resources to alternative storage tiers or retiring them altogether. Repurposing allows organizations to avoid or defer the costs to acquire additional capacity. Reclaiming as little as two percent of the storage infrastructure can save nearly $200,000 per year for a mid-sized SAN deployment. Let’s consider the numbers. What does it cost to provision SAN-attached storage to your application or database server? To arrive at an answer we chose ‘street prices’ for equipment typically found in a ‘mid-sized’ SAN deployment (500 servers). The general costs are: Plumbing the storage network between the server and storage array (network adapter, multi-tier network/fabric), about $8K for two redundant paths Fault-tolerant SAN-attached 4 TB LUN, approx. $8K Storage Management software, approx. 
$4K That’s about $20K to connect that super fast, highly available storage to</description>
      </item>
      <item>
         <title>Are we up, or are we down?</title>
         <link>https://www.broadcom.com/blog/up-or-down</link>
         <guid>https://www.broadcom.com/blog/up-or-down</guid>
         <pubDate>April 4, 2012</pubDate>
         <description>
	During our testing with HP’s ProLiant DL380 G7 server and HP’s 82E 8Gb Fibre Channel (8GFC) adapter, we encountered some 
connectivity issues with our internal infrastructure. With daily changes to our test lab infrastructure to accommodate the different tests we perform, there is always the possibility of something getting damaged along the way.

	 

	Deploying HP 8GbFC adapters with VMware ESXi 5.0 is a straightforward install since our Emulex lpfc820 drivers are already inbox. However, we did experience intermittent problems with our LUNs disconnecting and then reconnecting. With the Emulex OneCommand® Manager vCenter Server plug-in, there is an option to track up and down link connectivity. This feature is not on by default. When enabled, we noticed our link status in the Tasks &amp; Events tab from vCenter Server showing one of our ports disconnecting often. First, we tried replacing the SFP and we still experienced the intermittent disconnect. Next, we replaced the fibre cable and the problem was solved. The description in the Tasks &amp; Events tab will provide the WWN of the Fibre Channel ports with a link down and up status. The image below illustrates the link up status after the cable was replaced.
	For more information, check out the latest technical whitepaper from HP, which covers some of the features of ESXi 5.0. The deployment guide, entitled VMware vSphere 5.0: 8Gb/s Fibre Channel SANs with HP ProLiant DL380 G7 Servers and HP 3PAR Utility Storage, can be downloaded from the Implementer’s Lab
</description>
      </item>
      <item>
         <title>Gen 5 (16Gb) Fibre Channel from HP and Emulex Eases Data Center Traffic Jams | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/gen5-fc-solutions-ease-data-center-trafic-jams</link>
         <guid>https://www.broadcom.com/company/blog/gen5-fc-solutions-ease-data-center-trafic-jams</guid>
         <pubDate>September 23, 2014</pubDate>
         <description>Storage networking vendors, including HP and Emulex, announced Gen 5 (16Gb) Fibre Channel (FC) Host Bus Adapters (HBAs) and switches as early as 2011. Since then, data center managers with I/O-intensive workloads have been future-proofing their Storage Area Networks (SANs) in anticipation of Gen 5 FC hard disk drive (HDD), solid state disk (SSD) and tape storage systems. Below are five environments where data center managers are deploying Gen 5 FC storage to ease traffic jams and meet their application performance service level requirements: 1) All Flash Arrays Move the Bottleneck Back to the Server or Network—A single tray with 2TB of flash memory can handle one million I/O operations per second (IOPS), over 150x the IOPS capacity of an HDD storage array in the same form factor. End-to-end Gen 5 FC links are essential components for storage architects who want their server farms to realize the full potential of their all flash arrays. 2) Millions of People Accessing Data Warehouses Results in the Need for Gen 5 FC SANs—The need for Gen 5 FC is exploding as a completely new class of data, such as user location, is being captured from mobile devices. Further into the future, the Internet of Everything, including wearable devices, will create yet another explosion of new data. 3) Gen 5 FC Closes the Disk-to-Disk Replication and Disk-to-Tape Backup Window—With Gen 5 FC, the connections between disk storage systems for replication, and between disk storage systems and tape libraries for backup, will be up to 4x faster than 4Gb FC. 4) Do the Math: Gen 5 FC is Mandatory for Real-Time Editing—With 4x the number of pixels, real-time editing of 4K video requires over 8Gb per second for a single stream. Gen 5 FC technology is required for any SAN expected to support real-time</description>
      </item>
      <item>
         <title>Demartek Publishes Evaluation of HP StoreFabric SN1100E 16Gb Fibre Channel HBA | Connect and Converge with HP</title>
         <link>https://www.broadcom.com/company/blog/demartek-evaluates-hp-storefabric-hba</link>
         <guid>https://www.broadcom.com/company/blog/demartek-evaluates-hp-storefabric-hba</guid>
         <pubDate>December 21, 2014</pubDate>
         <description>Fibre Channel Storage Area Networks (SANs) carry the majority of storage traffic in the enterprise data center and this technology must continue to keep pace with demanding storage applications and increasing data growth. The HP StoreFabric SN1100E 16Gb Fibre Channel (16GFC) Host Bus Adapter (HBA) addresses these increasing demands on storage performance by providing double the bandwidth of previous generation Fibre Channel HBAs. For this evaluation, Demartek deployed an HP ProLiant DL380 Gen8 Server with the HP StoreFabric SN1100E dual-port 16GFC HBA and connected this server via a Brocade 6510 16GFC switch to an HP StoreServ Storage 7450 all-flash array with eight 16GFC host ports. Demartek ran a read-intensive, data warehouse workload based on the TPC Benchmark standard to determine whether this type of workload could take advantage of the increased bandwidth and performance that Gen 5 (16Gb) Fibre Channel provides. They repeated the database workload test with a previous-generation 8Gb dual-port Fibre Channel HBA, 8GFC switch and eight 8GFC storage ports and compared the results. As a final point of analysis, Demartek tested the same workload again, replacing portions of the 16GFC infrastructure with 8GFC optics to create a mixed speed environment. Key findings of the evaluation are below: Demartek confirmed that for the database workload used in testing, the 16GFC infrastructure created through the HP StoreFabric SN1100E HBA, Brocade 6510 FC switch, and 16GFC targets on the storage array exceeded the performance of the same workload in an 8GFC environment. The additional bandwidth available to the database workload enabled the job to complete in significantly less time, with a marked reduction in I/O latency. Demartek also confirmed that the 16GFC components provided enhanced performance, even in the mixed 16GFC/8GFC configuration. 
The HP StoreFabric SN1100E 16GFC HBA with Brocade 6510 end-to-end results include the following: The real database workload was completed</description>
      </item>
      <item>
         <title>NVGRE with the Emulex OCe14000 Adapters: A peek under the hood</title>
         <link>https://www.broadcom.com/blog/nvgre-emulex-oce14000-adapters-peek-hood</link>
         <guid>https://www.broadcom.com/blog/nvgre-emulex-oce14000-adapters-peek-hood</guid>
         <pubDate>October 27, 2014</pubDate>
         <description>Large scale virtualization and cloud computing, along with the need to reduce the costs of deploying and managing new servers, are driving the popularity of overlay networks. Network Virtualization using Generic Routing Encapsulation (NVGRE) is a virtualized overlay network architecture, which is designed to support the multi-tenant infrastructure in public/private/hybrid clouds using encapsulation and tunneling to create large numbers of virtual LANs (VLANs) for subnets that can extend across dispersed data centers and layer 2 (the data link layer) and layer 3 (the network layer) networks. The NVGRE header contains a 24-bit Tenant Network Identifier (TNI), which allows up to 16 million logical networks on the same management domain. This also allows the host to identify the customer virtual machine (VM) for any given packet. Figure 1: NVGRE Ethernet Frame Contents The Emulex OneConnect® OCe14000 series adapter offers the NVGRE offload capability and can be easily deployed in an NVGRE environment. NVGRE offload provides: 1) Checksum offload for encapsulated packets, including checksum calculations in IPv4 headers and in the UDP/TCP header. 2) Checksum offload verification in IPv4 headers and in the UDP/TCP header. 3) Large segment offload (LSO) for encapsulated packets based on inner packet information. 4) Packet steering using inner header MAC + Virtual Subnet Identifier ([VSID], included in the GRE header) information. The Emulex OCe14000 series adapter, when utilized in a Microsoft Hyper-V virtualized network, provides a scalable, multi-tenant cloud solution by virtualizing the IP addresses used by VMs. Multiple customer networks can run on top of the same physical network. Below is a sample configuration which was implemented and tested in the Emulex lab environment. 
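As a rough illustration of the framing described above, here is a minimal sketch that packs the 8-byte GRE header NVGRE uses: the Key Present bit is set, and the key field carries the 24-bit TNI/VSID plus an 8-bit FlowID (per the NVGRE specification, RFC 7637; the VSID and FlowID values below are illustrative):

```python
import struct

TEB_ETHERTYPE = 0x6558  # Transparent Ethernet Bridging, the GRE protocol type NVGRE uses

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE:
    flags with the Key Present (K) bit set, the TEB protocol type,
    and a 4-byte key holding the 24-bit VSID plus an 8-bit FlowID."""
    assert 0 <= vsid < 2**24       # 24-bit Tenant Network Identifier
    flags = 0x2000                 # K bit set; C and S must be 0, version 0
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, TEB_ETHERTYPE, key)

hdr = nvgre_header(vsid=0xABCDEF, flow_id=7)
print(hdr.hex())  # 20006558abcdef07
print(2**24)      # 16777216 -- the "16 million logical networks" figure
```

The 16-million-network figure quoted above falls directly out of the 24-bit TNI width.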
Figure 2: Two hosts connected to a 10Gb Ethernet (10GbE) networking switch The implementation/configuration of NVGRE with the OCe14000 series adapter</description>
      </item>
      <item>
         <title>iEye or “Don’t look into the laser with your good eye”</title>
         <link>https://www.broadcom.com/blog/ieye-dont-look-laser-good-eye</link>
         <guid>https://www.broadcom.com/blog/ieye-dont-look-laser-good-eye</guid>
         <pubDate>June 21, 2011</pubDate>
         <description>Back when I first started with Emulex, there used to be a sign in the old engineering lab that read “Don’t stare into the laser with your good eye”. Sort of sick humor, especially if you had injured your other eye by making the same mistake previously. In any case, that sign did its job. Not knowing any hard facts regarding laser safety, I have just always avoided looking directly into the optics of an HBA port or switch; after all, I didn’t want to be the first person I knew to have a Fibre Channel-related injury. Little did I know that the safety hazard wasn’t that great for Fibre Channel devices, but it seems that many others have felt the same as me, so clever solutions have been devised to verify whether the laser is working. Fibre Channel Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), switches and arrays that have optical interfaces use lasers for signal transmission. Network communication using fiber optics requires that the laser signal be uninterrupted, with limited loss from the source as it travels over multiple hops, patch panels, and switches to the destination. Troubleshooting physical connectivity problems in large, multihop optical networks is actually pretty simple—you just need to validate that the laser light is travelling end-to-end without interruption. The LEDs on the HBA or switch will fundamentally indicate if the link is up or down, but the laser will still transmit light on most current Fibre Channel devices even if the link is down (no blinking LEDs). Fiber optic sources, including Fibre Channel HBAs and switches, use class 1 lasers and are generally too low in power to cause any eye damage, but it’s still a good idea to check connectors with a power meter before looking into them. Some telco</description>
      </item>
      <item>
         <title>New Application Note: How to configure the Emulex Virtual Fabric Adapter in vNIC mode for VMware ESX 4.1</title>
         <link>https://www.broadcom.com/blog/app-note-configure-virtual-fabric-adapter-vnic-mode-vmware-esx</link>
         <guid>https://www.broadcom.com/blog/app-note-configure-virtual-fabric-adapter-vnic-mode-vmware-esx</guid>
         <pubDate>July 6, 2011</pubDate>
         <description>
	Emulex Technical Marketing created a deployment guide called Deploying 8Gb/s Fibre Channel with IBM System x and VMware, which you can find posted on the Implementer’s Lab website. This step-by-step guide explains how to configure the Emulex Virtual Fabric Adapter for IBM System x in pNIC mode. However, we know that VMware ESX/ESXi 4.1 hosts usually need more than two NICs to meet the requirements for some of the features offered by VMware vSphere. Configuring the Emulex Virtual Fabric Adapter in vNIC mode is clearly the way to go. The deployment guide, unfortunately, was written before the driver for the Virtual Fabric Adapter in VMware ESX 4.1 supported both pNIC and vNIC mode.

	 

	We’re happy to report that the driver for the Virtual Fabric Adapter in VMware ESX 4.1 now supports vNIC mode. So, as an addendum to the deployment guide, we wrote an Application Note and posted it on the Implementer’s Lab: How to configure the Emulex Virtual Fabric Adapter in vNIC mode for VMware ESX 4.1. This App Note lists the steps you should take to deploy the Emulex Virtual Fabric Adapter in vNIC mode.

	 

	Any further thoughts or questions on deploying 8Gb/s Fibre Channel with IBM System x and VMware, or configuring Virtual Fabric Adapters in vNIC mode? We’d love to hear from you on this topic or any other deployment or configuration question you may have. Post a comment here, or email us directly at implementerslab@emulex.com.
</description>
      </item>
      <item>
         <title>Big Data Solutions at Emulex</title>
         <link>https://www.broadcom.com/blog/big-data-solutions</link>
         <guid>https://www.broadcom.com/blog/big-data-solutions</guid>
         <pubDate>February 9, 2012</pubDate>
         <description>I’m sure we’ve all heard some of these staggering internet and data statistics that are making the rounds, right? Something along the lines of…
48 hours of video are uploaded to YouTube every minute (source)
8 trillion text messages were sent in 2011 (source)
An estimated 100 billion photos have been posted on Facebook (source)
Twitter logs over 250 million tweets per day (source)
We create 2.5 quintillion (2.5 x 10^18) bytes of data on a daily basis (source)
90% of the data in existence today has been created in the last two years (source)
For a little perspective, IDC has said that it has taken almost 60 years for disk drives to reach 1.7 zettabytes (ZB) of data in the Storage Universe and they expect that to almost quintuple to more than 8ZB by 2015. They also predict that 90% of this new data will be video and pictures¹. The sheer scale of this boom in data growth, and the sources driving it, such as consumer participation in the web, social media, mobile applications, credit card and banking transactions, and high frequency trading, just to name a few, show no signs of slowing down. In the traditional sense, “Big Data” has been used to describe massive amounts of data controlled and analyzed by huge organizations like Google. Below that echelon of organizations, “Big Data” is a relative term, proportionate to the size of an organization. Regardless of size, sector, or vertical, the exponential growth of data and focus on data analytics has prompted companies to adopt Hadoop – to uncover new and valuable information from unstructured data sets, and turn that into a competitive advantage for their business. The proliferation of data, and data sources, is exactly why this topic is growing in popularity – it relates to</description>
      </item>
      <item>
         <title>Is it time for SSD in the data center? You bet your OPEX!</title>
         <link>https://www.broadcom.com/company/blog/time-ssd-data-center-bet-opex</link>
         <guid>https://www.broadcom.com/company/blog/time-ssd-data-center-bet-opex</guid>
         <pubDate>May 3, 2013</pubDate>
         <description>Recently, we published a slideshow on IT Business Edge titled “Five reasons why HDD is dead and SSD is taking over.” Provocative? Sure, but that was the point. Do I really think the hard disk drive (HDD) market is dead? Not that it matters what I think, but no… EMC, IBM, HP and a huge number of storage vendors continue to sell massive quantities of HDDs every day and will continue doing so for the foreseeable future. However, recently it feels like we are rapidly approaching Gladwell’s tipping point where “ideas and products and messages and behaviors spread like viruses do1.” Well, the pandemic that is solid state disk (SSD) sure seems to fit that criterion. Come on, would IBM bet one BILLION dollars on something that is just a fad? In the immortal words of Ron Popeil, “but wait, there’s more,” it seems like every analyst on the planet is now talking as if the use of SSD/flash in servers and storage is becoming a de facto standard. So, what is the use case? While SSDs began appearing in servers in recent years as local storage, the idea that they could effectively replace storage area networks (SANs) began to fade when users realized that large databases, virtual environments, and big data analytics required lots of servers touching common shared storage. The use case for this local flash storage morphed into server-based caching, which is how companies such as EMC and Fusion-io are now positioning their PCI Express (PCIe)-based flash adapters. Also in recent years, these SSDs began appearing in the storage fabric in at least three use cases – SSD front-ending traditional spinning disks in a storage enclosure (hybrid arrays), SSDs in a fabric-based appliance front-ending a traditional spinning disk array, and all flash arrays as primary storage. Most</description>
      </item>
      <item>
         <title>N_Port ID Virtualization (NPIV) | The Implementer's Blog</title>
         <link>https://www.broadcom.com/blog/nport-id-virtualization-npiv</link>
         <guid>https://www.broadcom.com/blog/nport-id-virtualization-npiv</guid>
         <pubDate>November 27, 2011</pubDate>
         <description>
	
Recently, I was asked how to enable N_Port ID Virtualization (NPIV) for our high-performance Emulex OneConnect 10Gb Universal Converged Network Adapters (UCNAs) configured for Fibre Channel over Ethernet (FCoE). Searching through the Emulex documentation pages as the requester did, I was also unable to locate any information on this configuration. I didn’t think this could be any more difficult than configuring Fibre Channel, so I thought I’d take a stab at it. A Microsoft Windows Server 2008 host was used with an Emulex OneConnect OCe10102 adapter and Emulex OneCommand Manager 5.2.12.1 and 5.2.12.2 for one FCoE port. Since our adapters have two ports, you would perform the steps below for the second port as well.

	Here we go:

	 

	
	1) Open OneCommand Manager, select “View” from the drop-down menu, and group by adapters
	2) Select the FCoE port
	3) Select the Driver Parameters tab
	4) In the Adapter Parameter list, click once to select Enable NPIV
	5) Select “Enable” from the Modify Adapter Parameter section. The Adapter Parameter will turn red, indicating that a reboot is required. Because this enables only one port, another reboot will be required for the second port.
	6) Select “Apply” and reboot the server
	7) When the server comes back up, log in to your Windows server and open OneCommand Manager
	8) Select “View”, then “Group Adapters by Virtual Port”
	9) Select the FCoE port; you should now be able to create your virtual ports
	10) Select “Create Virtual Port” and a new virtual port confirmation window will appear
	As shown in the image below, the new virtual port will appear just below the physical port
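	Conceptually, the steps above switch on a per-port capability that lets one physical N_Port present multiple fabric-visible WWPNs. A toy Python model of that idea (a hypothetical class for illustration only, not the OneCommand Manager API; the WWPN scheme is made up):

```python
import itertools

class FcoePort:
    """Toy model of the NPIV concept: one physical port can host
    multiple virtual ports, each visible to the fabric under its
    own WWPN. Illustrative only -- not a real management API."""

    def __init__(self, wwpn: str, npiv_enabled: bool = False):
        self.wwpn = wwpn
        self.npiv_enabled = npiv_enabled  # the driver parameter toggled in step 4
        self.vports = []
        self._seq = itertools.count(1)

    def create_virtual_port(self) -> str:
        if not self.npiv_enabled:
            # Mirrors the real flow: NPIV must be enabled (and the host
            # rebooted) before virtual ports can be created.
            raise RuntimeError("Enable NPIV and reboot before creating vPorts")
        # Derive a distinct, fabric-unique WWPN for the new virtual port
        vport_wwpn = f"{self.wwpn[:-2]}{next(self._seq):02x}"
        self.vports.append(vport_wwpn)
        return vport_wwpn

port = FcoePort("10:00:00:00:c9:93:bf:00", npiv_enabled=True)
print(port.create_virtual_port())  # 10:00:00:00:c9:93:bf:01
```

	Each virtual port then logs in to the fabric independently, which is what makes NPIV useful for zoning and per-VM storage presentation.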



	 

	I hope this helps. If you still have questions, please contact Emulex technical support.
</description>
      </item>
      <item>
         <title>Taking NVGRE to the Next Level</title>
         <link>https://www.broadcom.com/blog/nvgre-next-level</link>
         <guid>https://www.broadcom.com/blog/nvgre-next-level</guid>
         <pubDate>June 12, 2012</pubDate>
         <description>TechED is Microsoft’s premier technology conference for IT professionals and developers, offering the most comprehensive technical education across Microsoft’s current and soon-to-be-released suite of products, solutions, tools, and services. We are excited to be here to talk about Microsoft technologies such as Receive Side Scaling (RSS), Single Root I/O Virtualization (SR-IOV), virtual machine queue (VMQ), Receive Segment Coalescing (RSC), and many more features coming in Windows Server 2012. We also have one more thing … We are showing the first results of a new technology called Network Virtualization using GRE, or NVGRE, based on our joint RFC submission to the IETF. Emulex, Microsoft, and others submitted NVGRE to address new networking requirements for virtualized environments. According to the draft RFC: “We describe a framework for policy-based, software controlled network virtualization to support multitenancy in public and private clouds using Generic Routing Encapsulation (GRE). The framework outlined in this document (the RFC) can be used by cloud hosters, enterprise data centers, and enables seamless migration of workloads between public and private clouds.” At TechEd 2012, Emulex is showing a prototype FPGA technology demonstration of NVGRE. This is an important next step for NVGRE to become a viable solution. Microsoft published a call to action at their Build Conference in September of 2011 for NIC vendors to “implement GRE compatible hardware offloads.” The Emulex prototype FPGA technology demonstration of NVGRE is shown below. The demo is simple. All we do is set up the virtual machines (VMs) without NVGRE enabled, with the configuration above, and the VMs cannot communicate. This is shown when a simple PING operation doesn’t work. Then we enable NVGRE through PowerShell scripts and set up the policies for the two different virtual networks. 
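The demo's behavior can be sketched as a toy model: a ping succeeds only between VMs whose virtual networks share a Tenant Network Identifier (TNI). The VM names and TNI values below are made up for illustration; the real demo enforces this with Hyper-V network virtualization policies:

```python
# Toy model of NVGRE tenant isolation: traffic is delivered only
# between endpoints on the same 24-bit Tenant Network Identifier.
# (Illustrative sketch, not the PowerShell policy configuration.)
vm_tni = {"VM1": 5001, "VM2": 5001, "VM3": 6001}

def can_ping(src: str, dst: str) -> bool:
    """A ping 'works' only when both VMs share a TNI."""
    return vm_tni[src] == vm_tni[dst]

print(can_ping("VM1", "VM2"))  # True  -> same virtual network
print(can_ping("VM1", "VM3"))  # False -> different tenant, isolated
```

This mirrors the demo narrative: before the policies are applied the VMs cannot reach each other, and afterward only VMs on the same TNI can.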
Once NVGRE is configured, the VMs on a specific TNI (Tenant Network Identifier) can see each other.</description>
      </item>
      <item>
         <title>16G Fibre Channel: Bigger and Badder FC For Virtualization, Cloud and Database Applications</title>
         <link/>
         <guid/>
         <pubDate>January 19, 2012</pubDate>
         <description>Most virtualization deployments rely on storage area networks (SANs) for flexible shared storage solutions to meet mobility, performance, scalability and efficiency requirements. As many data centers take the next steps in virtualizing big I/O applications, like databases, and move to more scalable private clouds, storage networking has become the primary bottleneck for Quality of Service (QoS) and scalability. The new Emulex LightPulse 16G Fibre Channel (16GFC) Host Bus Adapters (HBAs) fix that bottleneck, enabling the best QoS for the highest virtual machine (VM) density with the fewest ports and cables and the lowest power footprint. Additionally, the entire SAN fabric benefits from higher availability and reduced power requirements by leveraging a faster HBA. Because of better performance as well as streamlined management and backward compatibility, Emulex 16GFC HBAs are the best solution for virtualized environments. Here is what you can expect when upgrading to Emulex 16GFC HBAs:
5x the IOPS
Twice the data throughput
Half the application I/O response time
Up to 4x the IOPS for typical 4K/8K I/O block database applications
3x the IOPS performance per watt
Maximum VM density with increased N_Port ID Virtualization (NPIV) virtual ports (vPorts)
True cloud scalability, with support for up to 255 virtual functions, 1024 MSI-X interrupts, and 8192 logins and open exchanges for maximum VM density – up to 4x more than other 16GFC adapters
Unmatched native manageability with Emulex OneCommand Manager for VMware vCenter – enables adapter management directly from the vCenter console, delivering 2x the adapter management functionality and taking half the time to install and manage compared to other adapters
End-to-end data integrity with BlockGuard™ hardware offload – supports the T10 Protection Information (T10-PI) standard to protect against silent data corruption, without the 30-40% performance tax incurred by other firmware-based T10-PI solutions
If you’d like to learn more about 16GFC technology, join our</description>
      </item>
      <item>
         <title>The Benefits of Network Virtualization Offload Technologies to Optimize Performance for VXLAN | Emulex Labs</title>
         <link>https://www.broadcom.com/company/blog/the-benefits-of-network-virtualization-offload-technologies</link>
         <guid>https://www.broadcom.com/company/blog/the-benefits-of-network-virtualization-offload-technologies</guid>
         <pubDate>June 3, 2013</pubDate>
         <description>You may have seen my previous post regarding performance for Emulex Virtual Network eXceleration™ (VNeX), stating how hardware offloads can significantly improve network throughput while reducing CPU overhead for Virtual Network Fabrics (VNFs). Today, at VMworld 2013, Emulex announced forthcoming VNeX support for vSphere 5.5 to provide offloads for VXLAN-based VNFs. As we have discussed before, VXLAN, or Virtual eXtensible Local Area Network (as defined in its IETF RFC), defines how to build virtual networks in vSphere environments. VNFs create virtual network infrastructure where a virtual machine (VM) can be created and moved without any limitations that would be imposed by the legacy network infrastructure. VNFs create a new data center networking paradigm that is a game changer in terms of unlocking the flexibility and adaptability of server virtualization and making the network as easy to manage and configure as a virtual server. This is great news! The innovation that Emulex brings to the table is the first hardware offloads that will be integrated into the new vSphere 5.5. This is important because we need to ensure that virtual networks run as fast as possible, while freeing up memory and CPU resources to support more VMs. Emulex has done early engineering testing on the performance improvements of hardware offloads for VXLAN. The data is shown below, comparing VXLAN traffic with the offload enabled versus disabled:
Tx: 9.16 Gbps enabled vs. 7.83 Gbps disabled (20% improvement)
Rx: 9.34 Gbps enabled vs. 6.00 Gbps disabled (56% improvement)
Bi-directional: 14.8 Gbps enabled vs. 6.41 Gbps disabled (130% improvement)
Note: These test results are illustrative in nature and will vary based on VM density, server configuration, and other test parameters. Formal test results will be provided at a later date. Given the traffic pattern, VM workload, and CPU utilization for the server in this test, we saw up to a 130% improvement in throughput. Much</description>
      </item>
      <item>
         <title>Emulex OneConnect Adapters Are “Open” for Business</title>
         <link>https://www.broadcom.com/blog/oneconnect-adapters-open-business</link>
         <guid>https://www.broadcom.com/blog/oneconnect-adapters-open-business</guid>
         <pubDate>April 1, 2014</pubDate>
         <description>…For data centers built on Open Compute Project (OCP) designs, that is! The Open Compute Project Foundation is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing. The OCP designs are maximized for total cost of ownership (TCO), energy efficiency, and reduced complexity in the scalable computing space. To that end, Emulex has released the OCm14000-OCP series of Ethernet and Converged Network Adapters (CNAs) for data centers utilizing OCP-based server designs. By taking a quick look at the chart below, you will see that Emulex has packaged up some very nice advanced features for cloud and enterprise network use as well as for converged data centers–features that are above and beyond what you’ll find in other competitive products. Specifically, data centers built upon OCP designs that utilize Emulex OCm14000-OCP series adapters can now take advantage of a powerful set of features and capabilities, including: Open Enablement of Software-defined Networking (SDN): The recently introduced Emulex SURF open API provides the tools needed to implement SDN technology that can be optimized for next generation applications and new industry standards, such as OpenStack, CloudStack and OpenFlow. High-Performance Virtualization: OCm14000-OCP adapters use highly efficient and scalable hardware offload technology to take over the overhead of virtual networking, providing up to 50 percent better CPU utilization1 compared to standard NICs when used for VMware VirtualWire connection, thereby increasing the number of VMs that can be supported per server. In addition, the OCm14000-OCP adapters deliver a fundamental 4x increase in small packet network performance,2 which is required to scale transaction-heavy and clustered applications. 
Rapid, Secure and Scalable Cloud Connectivity: Emulex Virtual Network Exceleration™ (VNeX) offload technology provides up to 70 percent better performance1</description>
      </item>
      <item>
         <title>Hello Sao Paulo: Digital TV Takes Over in Brazil</title>
         <link>https://www.broadcom.com/blog/home-entertainment/hello-sao-paulo-digital-tv-takes-over-in-brazil/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/hello-sao-paulo-digital-tv-takes-over-in-brazil/</guid>
         <pubDate>July 30, 2012</pubDate>
         <description>Brazil is ready for a change in how it watches TV. Brazilian households that will switch from analog to digital terrestrial (DTT), or over-the-air, broadcasting are expected to grow by a staggering 400 percent in the next few years. Today at ABTA, the top broadcasting show for pay-TV and broadband in Latin America, Broadcom is helping to usher Brazil into the future of TV. By supporting existing terrestrial broadcasts, Broadcom's new technology delivers ISDB-T digital TV broadcasts with lower size, cost and power. What that means for Brazilian TV operators is that they'll get a chance to check out the benefits of Hybrid TV, which combines DTT with satellite, cable or IPTV to deliver totally new services like video-on-demand and DVR. Learn more about Broadcom's latest chips for the Latin American market: the BCM3471 and the BCM3472. Broadcom's solutions also feature unique technologies to help operators overcome issues associated with the transition to digital TV content, including slower channel changes and annoying variations in volume levels. Broadcom's FastRTV fast channel change technology speeds up channel flipping times to be nearly instantaneous, and support for Adaptive Volume Leveling technology mitigates any disparate volume levels between commercials and programming. Also, digital TV in Latin America is getting interactive. Picture apps and over-the-top content flowing together with traditional broadcast shows and movies for a true Internet-based TV experience. For a growing number of TV subscribers in Latin America, TV will become an interactive hub. Consumers can check their online bank account, purchase the latest designs spotted on a favorite telenovela character, or learn more about the upcoming World Cup. 
Internet-enabled cable and set-top boxes allow streaming content to multiple devices, such as smartphones, laptops and tablets. Now, consumers can take their content to go and never miss a soccer game or favorite TV show. With key home networking standards integrated in</description>
      </item>
      <item>
         <title>TV to Any Screen: DIRECTV's Media Center is Named Innovation Finalist</title>
         <link>https://www.broadcom.com/blog/tv-to-any-screen-directvs-media-center-is-named-innovation-fina</link>
         <guid>https://www.broadcom.com/blog/tv-to-any-screen-directvs-media-center-is-named-innovation-fina</guid>
         <pubDate>June 7, 2012</pubDate>
         <description>TV is on the move, with broadcasts heading to multiple screens, including mobile devices. Now, DIRECTV is launching a Broadcom-powered set-top box that liberates TV programming and more. Called the DIRECTV Home Media Center system, the box allows users to record five shows at once, store up to 800 hours of content and watch them in any connected room.

It features RVU technology so users can enjoy DIRECTV programming and HD DVR functionality on multiple compatible Smart TVs without the need of additional devices.

DIRECTV's Home Media Center has been recognized by its peers and named a finalist for the Innovation Award at IBC, the premier broadcasting show in Amsterdam in September.

The Broadcom chip inside, the BCM7400 HD STB SoC, opens the way for fast, advanced entertainment while reducing the overall cost and power requirements of deploying set-top boxes. Powered by Broadcom, the Media Center is an example of how DIRECTV is pushing the envelope in providing its customers with the best viewing and user experiences.

Stay tuned to see if the device becomes an award winner. Winners will be revealed at an awards ceremony at the IBC Show in Amsterdam on Sunday, September 9.

Related posts:

	Digital TV Goes Global
	Innovation powers HDTV and Pay-TV to reach larger audiences
</description>
      </item>
      <item>
         <title>The Next Frontier of Car Technology: Connecting to Everything</title>
         <link>https://www.broadcom.com/blog/the-next-frontier-of-car-technology-connecting-to-everything</link>
         <guid>https://www.broadcom.com/blog/the-next-frontier-of-car-technology-connecting-to-everything</guid>
         <pubDate>July 30, 2015</pubDate>
         <description>With Wi-Fi and Bluetooth connectivity becoming more common in today's cars, it's only a matter of time before the next generation of automotive connectivity starts kicking into gear. Next up are cars that talk to each other, to road sensors and even to our own bodies. The technology is called Vehicle-to-Everything, or V2X, and the push behind its adoption is primarily a safety one. Studies have found that accidents can be greatly reduced when cars are able to read each other's speeds, lane positions, brake status and steering-wheel position, among other things. With a constant analysis of its surroundings, the car itself is able to identify potential trouble, whether a car with worn brakes in the next lane or a pedestrian approaching the crosswalk ahead, faster than even the most alert or careful driver, or the best sensor system. But for V2X to work effectively, it needs reliable and secure wireless technologies, including the Wi-Fi and low-energy Bluetooth (or Bluetooth Smart) that Broadcom has been enhancing and improving for years. It also requires a government-backed 802.11p Wi-Fi standard called Wireless Access in Vehicular Environments, or WAVE, which enables data exchange between vehicles and the roadside infrastructure and operates in the 5.9 GHz band of the wireless spectrum. Linking those technologies to each other with automotive-grade silicon is where Broadcom sets itself apart. "Broadcom's strength is in integrating multiple wireless radios into a single-chip package," said Richard Barrett, director of wireless connectivity at Broadcom. "We are able to integrate Bluetooth low energy and Wi-Fi and enable them to coexist with other radio standards." 
Bluetooth, the low-cost, low-power wireless standard that consumers are already familiar with for connecting their phones to their car radios, is set to become increasingly important in V2X communications, specifically as it relates to the car's communication with people. Through Bluetooth, every pedestrian who carries</description>
      </item>
      <item>
         <title>CES 2015: Let NFC Personalize Your Ride</title>
         <link>https://www.broadcom.com/blog/ces-2015-let-nfc-personalize-your-ride</link>
         <guid>https://www.broadcom.com/blog/ces-2015-let-nfc-personalize-your-ride</guid>
         <pubDate>January 5, 2015</pubDate>
         <description>Consumers know that the ultimate mobile device is the one that's on four wheels, and Broadcom's technology is increasingly geared toward bringing the smartphone experience to the car. Today at the International Consumer Electronics Show, Broadcom announced the BCM89095, which offers automotive-grade support for the popular tap-to-activate technology called Near Field Communication (NFC). Tech-savvy consumers will likely recognize NFC as a convenient way to pair up devices (such as a Bluetooth headset with a mobile phone) or as a secure method of making mobile payments. What they might not know is that NFC also opens up a slew of new use cases for personalizing your ride. We're not talking about adding subwoofers or souping up your engine but rather, using your mobile device to let your car know, well, that you are you. It's really following the trend that we are seeing moving forward, where wireless connectivity is becoming increasingly important among automakers, said Richard Barrett, Broadcom Director of Wireless Connectivity. Within a few years, NFC is expected to become a standard feature in smartphones. In a February report, the market researcher IHS projected that 1.2 billion NFC-enabled handsets would ship by 2018. This is against the larger backdrop of wireless connectivity becoming a critical requirement in transferring content from mobile devices to the vehicle infotainment system. By leveraging NFC technology, drivers can pair a mobile device by simply tapping it against the windshield or dashboard, rather than navigating menus on both the mobile device screen and the in-car center console screen. They can get all of their content, such as streaming music channels from a smartphone or video content from a tablet, and easily share it with the car's infotainment system. 
Integration of NFC in the windshield allows an NFC-enabled digital key to exchange data such as authentication,</description>
      </item>
      <item>
         <title>5G WiFi Unveiled at CES, Industry Already On-Board</title>
         <link>https://www.broadcom.com/blog/5g-wifi-to-be-unveiled-at-ces-industry-is-already-on-board</link>
         <guid>https://www.broadcom.com/blog/5g-wifi-to-be-unveiled-at-ces-industry-is-already-on-board</guid>
         <pubDate>January 5, 2012</pubDate>
         <description>One of the benefits of making a Wi-Fi announcement at the Consumer Electronics Show in 2012 is that it's no longer necessary to explain the concept of the wireless technology. Wi-Fi has become a household term that consumers recognize as describing a shared wireless Internet connection to many of their devices - from laptop computers to smart phones and tablet PCs. And the lineup of WiFi-enabled devices is growing as televisions, set-top boxes and gaming consoles tap into the signal. Just in time to save us all from the seemingly inevitable consequence of adding all of these devices - deteriorating performance, choppy videos and slow load times - Broadcom is unveiling the first family of chips using the emerging IEEE 802.11ac standard, also known as 5G or fifth-generation WiFi. The chips are three times faster than their predecessors and up to six times more power-efficient than 802.11n chips, and are designed for a broad range of product segments. They're particularly revolutionary when it comes to handling the explosive increase in consumption of online video and other media types. Through the technology, the range of the wireless signal in the home is dramatically improved, allowing consumers to watch HD-quality video from more devices in more places - at the same time. The increased speed opens the door for faster downloads and synchronization of large video files - such as HD video - to mobile devices. As an added bonus, the faster speeds also reduce power consumption. Because the volume of data is transferred at a much faster rate, downloads are quicker, allowing the devices to enter low-power mode sooner. The 5G WiFi offerings from Broadcom include an 80 MHz channel bandwidth that is twice as wide as current offerings. They also run at a higher modulation scheme, which increases the efficiency of data transfer. Finally, they not only</description>
      </item>
      <item>
         <title>Global Markets like Broadcom Technology for Smartphones, Set-top Boxes and Smarter TVs</title>
         <link>https://www.broadcom.com/blog/global-markets-like-broadcom-technology-for-smartphones-set-top</link>
         <guid>https://www.broadcom.com/blog/global-markets-like-broadcom-technology-for-smartphones-set-top</guid>
         <pubDate>January 11, 2012</pubDate>
         <description>As an international conference, the Consumer Electronics Show attracts thousands of people from outside the United States every year. Because technology is a global industry, Broadcom is a global company.

On Tuesday in Las Vegas, the Broadcom booth came to life as the show officially opened and the attendees started pouring in. Later in the day, executives from the Broadband Communications, Mobile and Wireless, and Infrastructure and Networking business groups hosted a press conference to not only recap the day's announcements, but also offer some analysis of how the announcements affect Asian markets.

Broadcom's Ali Abaye explains advanced auto technology to global reporters at the 2012 International CES.

Certainly, the announcements around 5G WiFi, as well as Broadcom's advancements in Ethernet-based automotive technology, hold international interest. But technologies around smartphones, for example, have a greater appeal to Asia-Pacific markets.

Consider a news release issued that same day about Broadcom's 1GHz 3G smartphone baseband and reference design, which is already gaining traction with customers in China because it enables advanced features on affordable mass-market smartphones.

Broadcom is also accelerating China's network convergence with its DOCSIS-based EoC solution for cable broadband and EPON/GPON technology. Already, major Chinese cable operators Topway and Wasu have completed their trials while a trial at Gehua is underway, trends that indicate strong momentum for Broadcom's offerings.

Finally, in India, Broadcom has set-top box solutions to help the country's digitization efforts. The technology enables a quick conversion and provides consumers with new, faster and more responsive services. And the offerings support network video on demand, High Definition, FastRTV fast channel change, DVR and personal media sharing.

These are examples of how Broadcom technology crosses into multiple product segments in different regions to keep consumers connected in the home, in hand and on the go.</description>
      </item>
      <item>
         <title>Gen 5 Fibre Channel. What’s In A Name?</title>
         <link>https://www.broadcom.com/blog/gen-5-fibre-channel-whats-name</link>
         <guid>https://www.broadcom.com/blog/gen-5-fibre-channel-whats-name</guid>
         <pubDate>June 12, 2013</pubDate>
         <description>Emulex has moved to a new generational-based naming scheme for its LightPulse Fibre Channel Host Bus Adapters (HBAs). So why the change? The currently shipping fifth generation (Gen 5) Fibre Channel HBAs pack more than just the 16Gb punch the previous name would lead you to believe. LightPulse Gen 5 Fibre Channel HBAs have been designed from the ground up for the virtualization, cloud and database era, taking Fibre Channel SANs to the next level by delivering more than just incredible performance and higher throughput. Emulex Gen 5 HBAs deliver: increased SAN reliability, data integrity and availability; management simplicity; reduced operational costs across the data center; multi-speed adapters; and performance acceleration for virtualization and mission-critical applications. As you can see, Gen 5 is so much more than incredible performance. Fibre Channel development has evolved as data center needs have changed. Traditionally, storage was measured by Gb/$ metrics, and many technologies have been developed to address the Gb/$ requirement, such as data-dedupe technologies and flash-based storage. Today, data centers are challenged with how to accelerate application performance, maximize investment in costly application licenses and assure quality of service (QoS) for mission-critical applications, cloud and virtualized deployments. These days, I/O operations per second (IOPS)/$ is a more frequently used metric. Gen 5 Fibre Channel has addressed the performance acceleration challenge by delivering up to 6x more IOPS while reducing latency by up to 75%, drastically speeding up access to storage compared to the previous generation 8GFC HBAs. Lab tests comparing 8GFC and Gen 5 (16GFC) HBA performance have shown that by simply upgrading to Gen 5 HBAs, data centers can get a big application performance boost. 
Considering the incremental cost to upgrade to Gen 5 HBAs is a few hundred dollars per server, this upgrade provides a simple, cost-effective</description>
      </item>
      <item>
         <title>Web Apps Find a Secure Path to the Living Room TV</title>
         <link>https://www.broadcom.com/blog/home-entertainment/web-apps-find-a-secure-path-to-the-living-room-tv/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/web-apps-find-a-secure-path-to-the-living-room-tv/</guid>
         <pubDate>March 12, 2012</pubDate>
         <description>To understand the power of the latest technology to be unveiled by Broadcom, first think about how apps have redefined the mobile computing experience. Surfing the Web on a mobile device like a tablet PC or a smartphone no longer involves visiting a &quot;site&quot; on a web page. Today, mobile apps are the gateways to the on-the-go Internet, offering nicely organized access to games, news, tweets, videos and more. Now, as the app continues to evolve, the experience is heading for bigger screens, notably the TV screen that's mounted to the living room wall. And that's where Broadcom technology comes into play. The launch of the BCM7435 System on a Chip (SoC) marks a significant milestone in the next generation of television because it empowers the Pay-TV operators - which already have set-top boxes in millions of living rooms - to expand their lineup of offerings to include content-rich apps. The key is the security that keeps the two types of content - the Web-based content and the premium broadcast content - from exposing each other to possible vulnerabilities, such as attacks or outages. Specifically, it's the &quot;Web Domain Security&quot; element that works behind the scenes of the BCM7435 to manage the two content platforms in their secured, but separate, processing worlds and to police the interactions they have with each other. The result is a Web application that has no knowledge of and no access to any part of the inner, highly secured portion of the SoC. The benefit to the subscriber is an enriched user experience of downloadable applications, social networking, web-based widgets and more. For the media service operator, the benefit is the technology it takes to venture into new business models while preserving the security of the old one. With the recognition that tablets and smart handsets are playing an ever-increasing role in</description>
      </item>
      <item>
         <title>Broadcom Survey: Consumers Want More Connections On the Go</title>
         <link>https://www.broadcom.com/blog/broadcom-survey-consumers-want-more-connections-on-the-go</link>
         <guid>https://www.broadcom.com/blog/broadcom-survey-consumers-want-more-connections-on-the-go</guid>
         <pubDate>December 10, 2011</pubDate>
         <description>A new consumer poll conducted by JZ Analytics for Broadcom shows an increased appetite for seamless and pervasive connectivity not just in mobile devices, but in the home and car as well. Can this be right? The November 2011 poll of 1,025 consumers revealed the desire for a more connected lifestyle in the home and on the go. The survey noted that, in response to increasing consumer demands, connectivity technologies will feature prominently at the upcoming 2012 International Consumer Electronics Show, to be held January 10-13 in Las Vegas. Here are some of the survey results: Connecting in the Home. Respondents reported they are consuming far more video content, which eats up huge amounts of bandwidth. Still, they want to watch this content on multiple devices. Consuming online video: Two-thirds of people surveyed say they watch more than two videos a day (68 percent), while a quarter say they watch at least five videos a day (24 percent). Consuming overall digital content: 87 percent estimated that they consume more than 10 hours a week of digital content. More than half of all respondents (54 percent) consume more than 20 hours a week. Multi-screen entertainment in the home: Almost two-thirds (62 percent) said they would stream content that can normally only be watched on their TV to wireless devices such as laptops, smartphones or tablet PCs throughout the home should their cable or satellite provider offer the service. Connected televisions: Two-thirds (67 percent) are more likely to purchase a new HDTV that can easily connect directly to wireless devices such as smartphones, tablets or laptop PCs and the internet vs. one that cannot. Takeaway: Multi-screen home entertainment is becoming a reality with the benefit of technologies and standards such as transcoding, DLNA, Wi-Fi, MoCA and Powerline Networking. Connecting on the Go. People are using a wider</description>
      </item>
      <item>
         <title>Making the Smartphone Switch: Multi-Core Does So Much More</title>
         <link>https://www.broadcom.com/blog/chip-design/making-the-smartphone-switch-multi-core-does-so-much-more/</link>
         <guid>https://www.broadcom.com/blog/chip-design/making-the-smartphone-switch-multi-core-does-so-much-more/</guid>
         <pubDate>June 13, 2013</pubDate>
         <description>The smartphone industry has been coming of age this year, most recently with news that more than half of the mobile phones sold around the world were of the smartphone variety. As the adoption rate continues to climb, according to expert forecasts, a growing number of consumers will be clamoring for the true smartphone experience, which includes gaming, shopping, and shooting and sharing photos and videos. Around the world, Broadcom is helping phone makers to not only meet these demands but to do so at a price point that will help fuel further adoption. Today, Broadcom announced a new quad-core processor for 3G smartphones running Google's Android operating system. Related: Broadcom's Triple Threat: Turnkey Platforms Lower Smartphone Costs. The BCM23550's 1.2GHz HSPA+ cellular baseband will allow manufacturers to quickly adopt a nearly complete mobile processor platform to use with new and existing phone designs, a budget-friendly approach that makes sub-$100 smartphones possible. Get More Details in the Press Release. In Southeast Asia, one of the fastest-growing emerging markets for entry-level smartphones, sales were up more than 61 percent over the past year, according to market research firm GfK, and yet only about one-third of the population has adopted smartphones, leaving a sizable portion of the potential market for manufacturers to capture and convert to smartphone users, said GfK's Gerard Tan. But that's starting to change as Broadcom offers unparalleled integration in its turnkey 3G platform, bringing all of the basics, including cellular baseband, touchscreen controller, PMU, RFIC, as well as Broadcom's connectivity technologies, such as Wi-Fi, Bluetooth, GPS/GNSS and NFC. 
The cherry on top is the multi-core BCM23550 chip, released today, which allows device makers to stay competitive and support advanced features. The chip can handle a camera imaging sensor of up to 12 megapixels, for instance, as well as photography interface and</description>
      </item>
      <item>
         <title>World Cup Brazil: The Beautiful Game in Beautiful Ultra High Definition</title>
         <link>https://www.broadcom.com/blog/world-cup-brazil-the-beautiful-game-in-beautiful-ultra-high-def</link>
         <guid>https://www.broadcom.com/blog/world-cup-brazil-the-beautiful-game-in-beautiful-ultra-high-def</guid>
         <pubDate>June 12, 2014</pubDate>
         <description>For TV-watching fans of the beautiful game in Brazil, the World Cup runneth over. That means for millions of soccer fans, it doesn't get any better than 64 soccer games packed into a single month in your home country. But what if they could watch the final nail-biting matches gorgeously rendered in up to 8 million pixels? Thanks to a partnership between Broadcom, Elemental and Brazil's Globosat, they can. Broadcom said today it's set to enable satellite operator Globosat to broadcast the World Cup finals live in Ultra HD TV at public viewing spots in the games' host country, Brazil. Ultra HD, sometimes called 4K TV, offers four times the resolution seen in 1080p high-definition systems, but also requires four times the bandwidth to encode, deliver and decode, thanks to the high pixel density of the content. Globosat, a top distributor of sports content (think: the ESPN of Brazil), is tapping Broadcom's high efficiency video codec (HEVC) to decode the live-action Ultra HD broadcast with its BCM7445, a set-top box chip it unveiled at the 2013 Consumer Electronics Show. Broadcom is collaborating with video processing specialist Elemental Technologies, which will provide real-time HEVC video encoding for the World Cup broadcast. The live-action broadcast helps position Globosat and Broadcom at the forefront of TV technology, with the idea that Brazil's 18 million Pay-TV subscribers will make the transition from standard definition to high definition, and then from high-def to Ultra HD television. Our HEVC technology alleviates some of the bandwidth burden that 4K, 60-frame-per-second transmissions require, said Andreas Melder, director of product marketing in the Broadband Communications Group at Broadcom. It enables Globosat, the premier sports broadcaster, to deliver the latest quality video content to viewers. What better venue to showcase that than the World Cup? As the world's most-watched sports event,</description>
      </item>
      <item>
         <title>What's Powering Next-Gen Auto Technology?</title>
         <link>https://www.broadcom.com/blog/whats-powering-next-gen-auto-technology</link>
         <guid>https://www.broadcom.com/blog/whats-powering-next-gen-auto-technology</guid>
         <pubDate>July 23, 2012</pubDate>
         <description>When today's drivers think about in-car technology, it tends to be about GPS, satellite radio and back-seat video screens for the kids. The cables, wires and antennas that power these technologies are rarely top of mind. Companies like Broadcom are focused on the underlying technologies that improve the driver experience so that the technology just works, easing the minds of car owners and manufacturers alike. Broadcom has been working on such advancements as under-the-hood Ethernet cabling, which not only enhances the in-car experience but also impacts factors like vehicle weight, gas mileage, maintenance and safety. Google's self-driving car has heightened consumer interest in &quot;autonomous&quot; vehicles. Companies like Google, which has been experimenting with self-driving cars, have sparked consumers' imaginations around the concept of the Connected Car. The trend has stoked the desire for the smartphone experience, complete with the latest apps, streaming media, search and advanced navigation, from the road. Broadcom is actively working with car manufacturers to bring this vision to life. Broadcom has partnered with BMW to integrate the world's first Ethernet-based 360-degree surround view parking assistance system, on-board diagnostics and infotainment into future models of the X5. But that's just one example. In 2011, Broadcom partnered with BMW, NXP, Freescale, Hyundai Motor Company and Harman International to form the OPEN Alliance Special Interest Group. Together with automotive manufacturers and technology providers, we are working to expand wide-scale adoption of Ethernet-based automotive connectivity through single-pair unshielded networks. 
Broadcom is also actively involved with the IEEE and standards ratification. This week, Dr. Dirk Rossberg, head of the BMW Group Technology Office, will host a public session on the topic of Consumer Electronics and Smart Cars for the IEEE Santa Clara Valley Consumer Electronics Society in Silicon Valley. Dr. Rossberg will discuss how BMW's electric and electronic (E/E) engineering target architecture builds on</description>
      </item>
      <item>
         <title>Broadcom Enhances Standard Technology to Boost China's Cable Overhaul</title>
         <link>https://www.broadcom.com/blog/broadcom-enhances-standard-technology-to-boost-chinas-cable-ove</link>
         <guid>https://www.broadcom.com/blog/broadcom-enhances-standard-technology-to-boost-chinas-cable-ove</guid>
         <pubDate>April 2, 2012</pubDate>
         <description>When the Chinese government began its Next Generation Broadcast (NGB) initiative to create state-of-the-art networks that converge telecommunications, Internet and television, China's cable operators were faced with major overhauls to existing systems. To take advantage of this new opportunity, cable operators looked to Broadcom for help. By extending proven DOCSIS (Data Over Cable Systems Interface Standard) technology with Ethernet over Coax (EoC), Broadcom has created DOCSIS-based EoC as the ideal platform to address China's unique challenges in launching converged communications. Marrying DOCSIS and EoC: DOCSIS and EuroDOCSIS are standards pioneered by Broadcom and others, defining two-way operation over a cable network. Although the technology is a staple in the U.S. and Europe, the infrastructure equipment used to deploy DOCSIS in these regions is not optimized for China's denser populations. Previously, a number of other proprietary and vendor-specific EoC technologies have been widely used in China. These vendor-specific solutions unfortunately do not offer interoperability between equipment vendors, and provide no standardized method of implementing Quality of Service (QoS) for isochronous services such as voice, a key piece of the NGB initiative. Most of China's cable subscribers live in multi-tenant buildings - referred to as Multi Dwelling Units, or MDUs - with as many as 200 potential subscribers per building or cluster. These buildings are typically served by one of several &quot;final 100 meter&quot; technologies, such as fiber, twisted pair, Ethernet or coax, installed in the buildings' risers. 
Also see: Broadcom at CCBN: The China TV Blitz Begins, and Pay-TV in China Reaches New Heights with Broadcom Technology. Knowing that operators typically have access to the cable in the risers and prefer to use this cable to deliver service, Broadcom has developed a method that leverages existing EPON (Ethernet Passive Optical Network) or GPON (Gigabit Passive Optical Network) Optical Line Terminal (OLT) equipment with DOCSIS to create DOCSIS-based EoC for a</description>
      </item>
      <item>
         <title>With Broadcom Technology, India's Digital TV Transition is About More than a Better Picture</title>
         <link>https://www.broadcom.com/blog/with-broadcom-technology-indias-digital-tv-transition-is-about-</link>
         <guid>https://www.broadcom.com/blog/with-broadcom-technology-indias-digital-tv-transition-is-about-</guid>
         <pubDate>April 18, 2012</pubDate>
         <description>With more than 94 million analog cable TV households, India's TV market is ready for a new era. Digital TV not only looks better but also brings cool new services like digital video recording and video-on-demand to users.

The Indian government has a mandate to stage analog shut-offs, similar to what the U.S. did a few years ago. Broadcom's new cable set-top box technology makes this happen at a low price point, opening up new digital cable TV experiences in India.

The new BCM7014 allows Indian operators to quickly transition from analog to digital TV programming and services. By pushing the envelope on integration, Broadcom is lowering costs and power for users to experience digital TV. With an energy-efficient design, user set-top boxes will see a 65 percent reduction in power consumption, as well as faster boot-up times.

Sometimes, digital transitions introduce frustrating problems, such as slow channel change speeds or higher audio levels for commercials or other programs. To help reduce those sorts of problems, Broadcom has developed a unique fast channel-changing technology. Called FastRTV, the technology accelerates channel switches to speeds up to five times faster than other deployed solutions. To mitigate the issue of louder audio during commercials, Broadcom uses an automatic volume leveling technology.

Tonse Telecom, a research firm in India, estimates that about 65,000 set-top boxes will need to be deployed daily to meet India's digitization requirements. Broadcom's platform features high integration and low-cost designs to satisfy the growing market for cable TV in India.

And, with quick channel change speed and volume leveling technology, Broadcom solves common problems that occur when migrating from analog to digital TV.

Related:

	Digital TV Goes Global
	Innovation powers HDTV and Pay-TV to reach larger audiences
	Broadcom's Mobile Platform Summit: Seeing is Believing in India

 </description>
      </item>
      <item>
         <title>Trending at CES: LG's Smart TV Integrates Broadcom's 5G WiFi</title>
         <link>https://www.broadcom.com/blog/trending-at-ces-lges-smart-tv-integrates-broadcoms-5g-wifi</link>
         <guid>https://www.broadcom.com/blog/trending-at-ces-lges-smart-tv-integrates-broadcoms-5g-wifi</guid>
         <pubDate>January 7, 2013</pubDate>
         <description>Since the unveiling of Broadcom's first 5G WiFi chip at last year's International Consumer Electronics Show, a number of companies have launched products with the 802.11ac technology, including networking gear such as routers, as well as client devices like notebooks and PCs. This summer, we saw 5G WiFi land in smartphones, too, paired with Bluetooth 4.0 and FM radio on a single, integrated chip. The next frontier for the super-speedy Wi-Fi connectivity, fittingly revealed this week at the display-fest known to most as CES, is the integration of 5G WiFi into smart TVs. Today, Broadcom announced that LG Electronics is the first to integrate Broadcom's 5G WiFi technology into a digital TV, enabling viewers to tap a faster and more reliable wireless connection to deliver content to the big screens in their homes. Sangyeob Lee, LG Electronics Senior Director of TV Product Planning, said that partnering with Broadcom allows us to raise the bar and be the first company to introduce the next generation of Wi-Fi in our Smart TV platforms. It's an important milestone for the wireless home entertainment experience. The explosion of video consumption and the growing number of wireless devices being used are all putting stress on older Wi-Fi technologies, which can't match the speed and heft required to view and share in-demand content. That's left consumers to experience deteriorated performance, choppy videos and slower load times, especially when streaming content from the cloud, a smartphone or a tablet to a digital TV. 5G WiFi dramatically improves home wireless range, providing higher-capacity video streaming, the ability to connect multiple devices to the network at the same time and broader coverage with fewer dead spots. It also reduces power consumption by up to 83 percent in mobile devices, so consumers can go longer without having to plug in. By incorporating the BCM43526 chip and</description>
      </item>
      <item>
         <title>Emulex Bumps Up Fibre Channel Performance by 20 Percent, Adds PCIe 3.0 and Advanced Data Integrity | Emulex Labs</title>
         <link>https://www.broadcom.com/blog/emulex-bumps-up-fibre-channel-performance-by-20-percent</link>
         <guid>https://www.broadcom.com/blog/emulex-bumps-up-fibre-channel-performance-by-20-percent</guid>
         <pubDate>September 30, 2012</pubDate>
         <description>Just when you thought Fibre Channel (FC) couldn’t get any better or more reliable, Emulex has announced the LPe16000B series of 16GFC Host Bus Adapters (HBAs). This second-generation 16GFC adapter delivers a new PCIe 3.0 bus and a performance boost, making it more than 20 percent faster than any other FC HBA available today. And that’s not all; check out this list of features that are only found on Emulex LPe16000B series adapters: A whopping 1.2 million I/O operations per second (IOPS) on a single port¹ – the LPe16000B raises the FC performance bar yet again to support more virtual machines (VMs) and larger and faster database transactions, as well as delivering the fastest FC IOPS for connection to solid state disks (SSDs) and flash caching appliances. The LPe16000B features an eight-processor-core design, all of which can apply processing power to one port, delivering exceptionally high IOPS on one port when needed. Click here to see the Demartek Performance Evaluation. PCI Express 3.0 support – The LPe16000B is the only FC adapter to support PCIe 3.0, making it the perfect match for the performance capabilities of new server architectures like the Intel E5-2600 family of servers. PCIe 3.0 provides a faster I/O bus, more PCIe lanes and increased I/O bandwidth. The LPe16000B HBAs are also backward compatible with PCIe 2.0 and with 4 and 8GFC infrastructures. End-to-end data integrity with T10 Protection Information (T10 PI) offload keeps data safe from silent data corruption – Emulex is pleased to be collaborating on the first end-to-end T10 PI solution with Oracle and EMC. With T10 PI hardware offload on LPe16000B HBAs, data integrity checks can occur without the 30 percent IOPS performance penalty seen with the firmware-based T10 PI implementations tested by Emulex. Tests showed no IOPS performance difference with T10 PI</description>
      </item>
      <item>
         <title>RoCE goes horizontal! New IBTA specification enables application acceleration throughout the data center</title>
         <link>https://www.broadcom.com/blog/roce-goes-horizontal-new-ibta-specification-enables-application</link>
         <guid>https://www.broadcom.com/blog/roce-goes-horizontal-new-ibta-specification-enables-application</guid>
         <pubDate>September 16, 2014</pubDate>
         <description>Today, Emulex joined the InfiniBand Trade Association (IBTA) in announcing an enhancement to the RDMA over Converged Ethernet (RoCE) specification, which will be known as “RoCEv2.” The primary enhancement that RoCEv2 adds to the existing RoCE specification is routability, which breaks the boundary of Layer 2 networks and enables enterprises to use RoCE to accelerate applications anywhere in the data center. This innovation enables data centers to expand the value of RoCE to multiple domains and physical locations. RDMA enables more efficient communication by allowing data to be moved directly between memory on two servers without CPU involvement on either of the servers, also called zero-copy networking. This processing occurs on the RDMA-capable Network Interface Card (NIC) and bypasses the TCP/IP stack, accelerating the movement of data. This allows the data to be delivered directly to the remote memory on the destination server and reduces the CPU I/O workload, freeing both servers for other processing. While RDMA originated as a feature of InfiniBand networks, efforts to support RDMA on Ethernet have been underway for a number of years. The initial RoCE specification in 2010 brought RDMA to Ethernet, but required lossless networks, which limited its applicability. Today, one of the primary uses for RoCE is for high performance data transfers on Windows Server 2012 through Server Message Block (SMB) Direct, as well as being supported in various flavors of Linux. The primary benefits of RoCE are due to the lower latency it offers, better network utilization and lower CPU utilization due to the TCP/IP bypass and hardware offload. RoCE also furthers the concept of cable consolidation due to the convergence of another protocol onto a single wire from the server to the network. However, with the continued growth in virtualization, cloud computing and dispersed big data repositories, the</description>
      </item>
      <item>
         <title>Unleash Your Application Performance with Fibre Channel and Software-defined Storage</title>
         <link>https://www.broadcom.com/blog/unleash-application-performance-fibre-channe-lsoftware-defined</link>
         <guid>https://www.broadcom.com/blog/unleash-application-performance-fibre-channe-lsoftware-defined</guid>
         <pubDate>January 28, 2015</pubDate>
         <description>
	Today, Emulex and DataCore announced that DataCore SANsymphony-V10 will provide support for Emulex Gen 5 (16Gb) Fibre Channel (FC) adapters as targets in SANsymphony servers. The joint solution is available from DataCore and its channel partners.

	Software-defined storage (SDS) solutions, such as DataCore SANsymphony-V10, turn industry-standard servers into full-fledged storage arrays. Typically, these servers are higher-end x86 servers with lots of internal storage, lots of PCI Express (PCIe) slots for added flash storage, or SAS-attached local storage. While these storage arrays are fully functioning and can be used for primary shared storage, the typical use case for SDS and SANsymphony-V10 is to accelerate application performance as an in-line storage resource pool.

	SANsymphony-V10 comes complete with a Random Write Accelerator engine, which can dramatically boost performance by caching data in DataCore's RAM cache and later de-staging it to back-end disk. This can improve I/O Operations Per Second (IOPS) by up to 3.6X for solid state disks (SSDs) and 33X for SATA disks.

	When coupled with the low latency and high throughput of the Emulex Gen 5 FC HBA, this is a lightning-fast solution for accelerating enterprise applications, virtualized servers and virtual desktop infrastructure (VDI) implementations. Emulex is working with DataCore to implement this solution in a number of environments where performance matters.

	For more details on how you can unleash the performance of your enterprise applications with Emulex and DataCore, view the webcast here.
</description>
      </item>
      <item>
         <title>Faster is Better: Advanced 8GFC for ThinkServer</title>
         <link>https://www.broadcom.com/blog/faster-is-better-advanced-8gfc-for-thinkserver</link>
         <guid>https://www.broadcom.com/blog/faster-is-better-advanced-8gfc-for-thinkserver</guid>
         <pubDate>April 24, 2015</pubDate>
         <description>The current Emulex family of 8Gb Fibre Channel (8GFC) adapters has been on the market for close to 9 years. So in IT terms, there’s very, very, very old technology inside the adapter. We introduced the Emulex family of 16Gb (Gen 5) Fibre Channel (16GFC) adapters in 2012, but the high cost of optics has impacted the growth of that product. Much like when 4GFC transitioned to 8GFC, the transition to 16GFC, while occurring, is happening at a somewhat lethargic pace. There are a lot of customers who are just now moving to 8GFC. However, storage technology has not stood still over those seven to ten years. We now have solid state disk (SSD) drives, and hybrid arrays that have a combination of SSD and magnetic media handling I/O. If you plug in a seven-year-old PCI Express (PCIe) Gen 2.0 device and pair it with a brand new SSD, you’ll be disappointed with the performance results. So could we have the best of both worlds? Could we take the benefits of the newer 16GFC adapter, but at the cost of an 8GFC adapter? To do just that, Emulex has created the Emulex Advanced-8™ 8GFC adapter. It has the newer 16GFC ASIC, but with the lower cost 8GFC optics. So how does it perform? Surprisingly, you get almost double the standard 8GFC performance in a real-world test. I recently had someone put my claims to the test. I’d been telling folks about this capability for a while now, but we hadn’t had anyone take me up on the challenge to see if swapping out an older-technology 8GFC adapter for a new 16GFC adapter would make much difference. A Microsoft SQL I/O workload in a production environment saw a 67% increase in I/O operations per second, with a throughput</description>
      </item>
      <item>
         <title>Rajiv Kapur in ET Telecom: &quot;Technology Has the Power to Influence and Shape Every Aspect of Modern Life&quot;</title>
         <link>https://www.broadcom.com/blog/rajiv-kapur-in-et-telecom-technology-has-the-power-to-influence</link>
         <guid>https://www.broadcom.com/blog/rajiv-kapur-in-et-telecom-technology-has-the-power-to-influence</guid>
         <pubDate>February 10, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in ET Telecom, in which Rajiv Kapur, Senior Director of Business Development at Broadcom, talks about how Broadcom technologies will change everyday life. From ET Telecom: It's no secret that today's consumers are hungry for all the jaw-dropping, life-simplifying innovation that technology can deliver. That's why companies and entrepreneurs are scrambling to bring that next &quot;must-have&quot; gadget to market. Behind every great gadget, however, is an equally impressive technology, and this is where things get really interesting, especially given that developers continue to fuel the fire of consumers' imaginations with technologies promising to enable bigger, better, faster, and even sharper-resolution devices. What are the technologies behind this year's top gadgets? Let's take a closer look. 4K and Over-the-Top Content: Since the advent of digital television, consumers have increasingly demanded sharper screens and a better viewing experience. In the past five years alone, we've seen ever-increasing screen sizes and now a revolution in graphics, all culminating in the realization of Ultra High-Definition (HD) TV and Over-The-Top (OTT) content delivery. The technology making these advances a reality is the recently ratified H.265, or High Efficiency Video Coding (HEVC), standard. Because HEVC reduces bandwidth usage by 50 percent, it enables the delivery of Ultra HD content on new consumer products such as Sony's dazzling 84-inch Ultra HD TV. Of course, Sony is not alone in its use of the HEVC standard. Other major manufacturers like LG, Samsung and Vizio are also capturing the world's attention with multiple Ultra HD TV sizes and price points. And Ultra HD content is beginning to catch on, spurred by recent announcements from Netflix and Sony. 
Netflix has begun testing Ultra HD content, otherwise</description>
      </item>
      <item>
         <title>Rajiv Kapur in CXO Today: &quot;Digitization is Enticing Customers to Re-Think How They Watch TV&quot;</title>
         <link>https://www.broadcom.com/blog/rajiv-kapur-in-cxo-today---digitization-is-enticing-customers-to-re-think-how-they-watch-tv-</link>
         <guid>https://www.broadcom.com/blog/rajiv-kapur-in-cxo-today---digitization-is-enticing-customers-to-re-think-how-they-watch-tv-</guid>
         <pubDate>July 8, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in CXO Today, in which Rajiv Kapur, Senior Director of Business Development at Broadcom, talks about how Broadcom is responding to India's television digitization mandate. From CXO Today: Digitization is one area that has been going through an exciting transition over the past several years. The trend has gained further momentum since the Telecom Regulatory Authority of India (TRAI) mandated digitization in the country, prompting major players to bring cost-effective and sophisticated solutions to the market. In an exclusive interaction with CXOtoday, Rajiv Kapur, Senior Director, Broadcom India, explains the state of digitization in the country and how it is driving greater innovation in the industry. What is the current state of digitization in the country? Since TRAI mandated digitization, India has largely completed the transition from analogue to digital, with 12 million digital set-top boxes (STBs) seeded and 80% of consumer application forms processed by TRAI. The final phases of digitization will be implemented in municipalities and throughout the rest of the country by the close of 2014. India's compulsory digitization has been one of the key factors contributing to the growth of the STB market in India. Digitization of the television industry creates a need for a set-top box with every digital TV set. The estimated market in India today is well over 100 million sets and growing. This presents a massive opportunity for companies such as Broadcom. Digitization requires a sophisticated head-end to be set up, which includes electronic equipment built on powerful, cost-efficient semiconductors. Additionally, operators now have the opportunity to build two-way networks that will give subscribers new interactive services built on new infrastructure and modems. 
What</description>
      </item>
      <item>
         <title>Broadcom BroadR-Reach Ethernet Portfolio Brings Autos into Digital Age</title>
         <link/>
         <guid/>
         <pubDate>December 7, 2011</pubDate>
         <description>Consumer interest in driver safety and infotainment features is at an all-time high, but automotive technology has not kept up with consumer expectations. Connectivity is edging its way squarely into the equation.

Collision warnings, comfort controls, infotainment and advanced driver assistance systems are emerging as compelling new automotive applications, increasing the need for bandwidth and connectivity within and between in-vehicle networks.

Today, Broadcom responds by unveiling the next generation in automotive connectivity. The Broadcom BroadR-Reach Ethernet portfolio is the broadest automotive Ethernet product portfolio in the industry, consisting of five devices: three highly integrated switches with embedded PHYs and two stand-alone PHY solutions. All are designed to meet the rigorous demands of the automotive industry.

In addition, the portfolio is the first to enable 100Mbps over unshielded single twisted pair cabling, increasing performance while substantially reducing connectivity cost and cabling weight. Unlike existing Ethernet solutions that are closed (isolated in end-point applications using either LVDS or 100BASE-TX Ethernet cable), Broadcom Ethernet technology enables the migration to an open, scalable network.

This announcement follows the recent introduction of the OPEN (One-Pair Ether-Net) Alliance Special Interest Group (SIG). Established to drive wide-scale adoption of Ethernet-based automotive connectivity as the standard in automotive connectivity, the SIG will address industry requirements for improving in-vehicle safety, comfort, and infotainment, while significantly reducing network complexity and cabling costs. Members include Broadcom, NXP Semiconductors N.V., Freescale Semiconductor, Harman International, BMW, Hyundai Motor Company and Jaguar Land Rover. The BroadR-Reach specification is available to all interested OPEN Alliance members under RAND licensing terms from Broadcom. Visit www.opensig.org to learn more.

For more information on the Broadcom BroadR-Reach automotive portfolio, visit go.broadcom.com/ or check out the Broadcom demo at the Consumer Electronics Show, January 10-13, 2012.</description>
      </item>
      <item>
         <title>Seen at the MWC Broadcom Booth: &quot;PC-on-a-Stick&quot; Dongle Makes Computing Accessible, Affordable</title>
         <link>https://www.broadcom.com/blog/seen-at-the-mwc-broadcom-booth-pc-on-a-stick-dongle-makes-compu</link>
         <guid>https://www.broadcom.com/blog/seen-at-the-mwc-broadcom-booth-pc-on-a-stick-dongle-makes-compu</guid>
         <pubDate>February 25, 2013</pubDate>
         <description>Here at Mobile World Congress, the latest and greatest technologies for on-the-go computing make up most of the buzz. Still, the humble PC hasn't been eclipsed by sleek, do-all tablets and smartphones. At Broadcom's booth, the PC isn't going away. It's just becoming more compact and more affordable. Today, at the Mobile World Congress show in Barcelona, Broadcom is showing a dongle-like device (think of it as a PC on a stick) that's about the size of a pack of chewing gum. The device, displayed without a cover at the Broadcom booth, is essentially a printed circuit board that plugs into either a USB port or HDMI port on a larger display, such as a monitor or TV. Combined with cloud technology and connected to any Bluetooth-enabled mouse or keyboard, this dongle has the potential to become the next generation of personal computing. Through Wi-Fi, the device taps into the Internet, where it connects to an operating system and computing applications hosted in the cloud to become, in essence, a personal computer that fits into your pocket. The potential use cases are numerous, but Broadcom's Martyn Humphries, Vice President &amp; General Manager, MAP, in the Mobile &amp; Wireless Group, highlights two immediate potential, always-on-the-go user groups: telecommuters and students. For remote workers, the dongle-like device is a surrogate work PC, authorized for access to a company network and files, via the cloud. Because it's significantly cheaper to issue than a standard laptop or desktop, the company even has the option of disconnecting the user when the job is over, instead of trying to chase someone down for returned equipment. Likewise, students can also benefit, Broadcom's Humphries said. Getting broadband into classrooms isn't as much of a challenge as it once was, but procuring computers for students is still expensive, as is maintenance and security. Through this model, student-issued</description>
      </item>
      <item>
         <title>From CCBN: Set-Top Box Tech Tailored For China's Growing Cable Market</title>
         <link>https://www.broadcom.com/blog/from-ccbn-set-top-box-tech-tailored-for-chinas-growing-cable-ma</link>
         <guid>https://www.broadcom.com/blog/from-ccbn-set-top-box-tech-tailored-for-chinas-growing-cable-ma</guid>
         <pubDate>March 21, 2013</pubDate>
         <description>BEIJING - From the floor of the China Content Broadcasting Network show this week, one trend is clear: Growth is on the horizon for the cable television landscape in China. Today's cable industry in China is expanding at full tilt for a number of reasons. First, a change in administration is expected to bring a boost to the economy and increase capital spending on infrastructure. Second, a sweeping Next-Generation Broadband initiative and a nationwide transition from analog to digital broadcast are currently under way. Finally, momentum is building for C-DOCSIS, the China-specific Data Over Cable Service Interface Specification, which promises to bring interoperability and quality of service to cable TV and broadband. All of these factors add up to a big upswing in demand for Pay TV services among Chinese consumers and the television set-top boxes that bring those services into Chinese homes, according to Charlie Lou, senior product line manager for cable in the Broadband Communications Group at Broadcom. A potential infusion of capital should have a favorable impact on China's Next Generation Broadcast (NGB) initiative to create state-of-the-art networks that converge television, Internet and telecommunication services to support its transition from analog to digital transmission and broadband Internet access, Lou said. In turn, the transition will boost demand for high-definition set-top boxes (STBs) and two-way network equipment, which in China are provided by cable service providers to subscribers. That's the backdrop for this week's CCBN exhibition, a conference for cable industry professionals that drew Broadcom's best broadband experts to Beijing. 
Related: Broadcom at CCBN: Beijing Brings Cable to the Forefront
At the show, Broadcom is unveiling the BCM7583 and the BCM7584, two cable set-top box chips that offer cable operators in China features that rival those of IPTV and OTT (over-the-top) service providers at an affordable cost. Some features include high-definition broadcast support,</description>
      </item>
      <item>
         <title>Comcast Picks Broadcom for Cloud-based IPTV Set-Tops</title>
         <link>https://www.broadcom.com/techblogs/broadcomblogs/comcast-picks-broadcom-for-cloud-based-iptv-set-tops</link>
         <guid>https://www.broadcom.com/techblogs/broadcomblogs/comcast-picks-broadcom-for-cloud-based-iptv-set-tops</guid>
         <pubDate>January 7, 2013</pubDate>
         <description>Comcast is on the road to an all IP-based set-top box, and Broadcom's helping it get there. It's one of the many cable innovations being delivered through powerful new set-top boxes on display across the convention center floor at the International Consumer Electronics Show in Las Vegas this week. Cable operators like Comcast are hoping to thrill subscribers with new offerings that go beyond the typical &quot;triple play&quot; broadband package. The rise of Internet Protocol TV (IPTV) boxes and cloud-based storage for digital video recorders (DVRs) has generated a lot of buzz lately, thanks to the ability to create a truly &quot;on-demand&quot; viewing experience. Features such as search, video-on-demand (VoD), streaming Internet services (such as Netflix and Pandora) and the ability to share content between devices continue to raise the bar for operators and remove barriers between viewers and their favorite TV content. Both service providers and subscribers are looking for solutions that will provide quality content, fast streaming and additional options while staying affordable. That's where Broadcom and Comcast shine: At CES this week, the companies are talking about the first deployment of Comcast's Device Software Reference Design Kit, which is based on the BCM7125 connected home set-top box system-on-a-chip (SoC). The chipset will be going into Comcast's RNG-150 set-top box, which is set to offer customers a slew of upgrades and options by being the first to fully integrate Internet connectivity with standard digital cable. Broadcom makes all the fancy new features play nicely together in one box by integrating five major set-top box functions into one device, including:
	a full standards-based DOCSIS cable modem
	a decoder that ensures the delivery of high-def content to your TV
	MoCA-powered connectivity for watching recorded content in any room
	3-D graphics for a slick user interface
	speedy content delivery with 1GHz tuners</description>
      </item>
      <item>
         <title>Inside the Boxee TV: Broadcom Powers &quot;No Limits DVR&quot;</title>
         <link>https://www.broadcom.com/blog/home-entertainment/inside-the-boxee-tv-broadcom-powers-no-limits-dvr/%09</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/inside-the-boxee-tv-broadcom-powers-no-limits-dvr/%09</guid>
         <pubDate>October 17, 2012</pubDate>
         <description>Editor's Note (7/8/13): Boxee is now part of Samsung Electronics Inc. and as of July 10, 2013, will no longer be offering Cloud DVR service to subscribers. For more information, see Boxee's announcement on their website. They call it No Limits DVR, a TV service that saves your favorite TV recordings to the cloud instead of a hard drive, and not only allows you the unlimited space to store them but also the streaming capabilities to watch them from any device or display. It's a feature on the new Boxee TV, a device that brings together broadcast TV channels, digital video recording (DVR) and Internet apps in one sleek little box. A device with a cloud-based DVR in addition to streaming apps requires powerful technology. (Image caption: Screen shot of Boxee TV's &quot;No Limits DVR,&quot; where TV content is stored and retrieved in the cloud.) That's where Broadcom comes in. Broadcom's high-definition IPTV technology does some of the heaviest lifting under the hood of the Boxee TV so the consumer has a seamless multimedia download experience. It's our BCM7231 set-top box (STB) chipset that supports the high performance streaming capabilities at a lower cost for the manufacturer (Boxee, in this instance). Broadcom works closely with leading cable, satellite and telecom companies to give their customers just what they have been clamoring for: more storage space for more choices on what to watch. Broadcom also adds dual BCM3517 Digital Cable-Ready DTV Receiver chipsets that enable the watch-and-record-to-the-cloud feature and truly differentiate the Boxee TV from the competition. The dual tuner allows the viewer to record two programs at once, or record one program while watching another. 
Broadcom's digital receiver chipset is compatible with both North American digital cable television and digital terrestrial broadcast television standards, which allows consumers</description>
      </item>
      <item>
         <title>Broadcom and Rovi Team Up to Slim Down Ultra HD's Big Bandwidth</title>
         <link>https://www.broadcom.com/blog/broadcom-and-rovi-team-up-to-slim-down-ultra-hds-big-bandwidth</link>
         <guid>https://www.broadcom.com/blog/broadcom-and-rovi-team-up-to-slim-down-ultra-hds-big-bandwidth</guid>
         <pubDate>June 10, 2013</pubDate>
         <description>Consumers have proven time and time again that, when it comes to new television technologies, they'll eventually buy in, just at their own pace. Case in point: high-definition televisions that were out of reach for most consumers when they hit the scene five or so years ago have finally reached a pricing level that's attractive to the mainstream. Now, the next wave of TV technology, an advanced display called Ultra HD, is teasing consumers with beautiful, roughly 4,000 x 2,000 pixel content but still keeping them at arm's length with sky-high sticker prices on the newest sets, which eventually will come down. This time around, however, pricing isn't the only potential barrier to adoption. (Image caption: Ultra HD display with Broadcom's video decoder tech on display at the 2013 Consumer Electronics Show.) The elephant in the room is bandwidth consumption, especially as services that stream content over the Internet continue to gain in popularity. The massive file size of Ultra HD streaming content puts a burden on home broadband networks when you consider that a standard-definition movie gobbles more than 1 gigabyte (GB) of data and a high-definition movie at 1,080 pixels eats up more than 3 GB. With super-sized Ultra HD, those files jump dramatically, to around 160 GB. Couple those data estimates with the fact that the average household watches almost 9 hours of media a day across all platforms, with content streaming gaining momentum, and the concerns about bandwidth constraints become more obvious. Those are just the theoretical limits faced by most U.S. broadband users. Although average consumer broadband speeds have been bumped to 7.4 Mbps, some carriers are still looking at so-called speed tiers to mete out data to users, with the most affordable providing enough bandwidth for just one high-quality stream. If the typical family</description>
      </item>
      <item>
         <title>Digital TV Goes Global</title>
         <link>https://www.broadcom.com/blog/television-2/digital-tv-goes-global/</link>
         <guid>https://www.broadcom.com/blog/television-2/digital-tv-goes-global/</guid>
         <pubDate>March 19, 2012</pubDate>
         <description>Broadcom is enhancing the global home entertainment experience with another industry first: making next-generation HDTV services available on existing standard networks.

Today, at IPTV World Forum in London, Broadcom announced the BCM3461, the industry's first fully integrated 40nm DVB-T2 receiver. DVB-T is the most widely deployed DTT (Digital Terrestrial Television) system worldwide, adopted in more than 60 countries and found on more than 200 million receivers. The BCM3461 is a unique receiver because it enables consumers to enjoy next-generation TV services, such as 1080p high-definition movies and games, in areas without existing digital &quot;terrestrial&quot; implementations. Terrestrial networks are an older mode of television broadcasting that does not involve satellite transmission or cables. They are prevalent throughout Europe and also found in Russia and South Africa.

Offering a significantly smaller and lower cost design, Broadcom's DVB-T2 receiver integrates a low-noise amplifier (LNA), tuner and demodulator on a single chip. With this new platform, Broadcom is helping to transition current analog TV systems, driving the deployment of digital terrestrial services in at least 28 countries.

The BCM3461 is a great example of Broadcom's ability to enter into new markets and build value for its customers, leveraging its extensive platform experience.</description>
      </item>
      <item>
         <title>Compression Tech in the Spotlight at IBC as Ultra HD Goes Global</title>
         <link>https://www.broadcom.com/blog/television-2/compression-tech-in-the-spotlight-at-ibc-as-ultra-hd-goes-global/</link>
         <guid>https://www.broadcom.com/blog/television-2/compression-tech-in-the-spotlight-at-ibc-as-ultra-hd-goes-global/</guid>
         <pubDate>September 12, 2014</pubDate>
         <description>When the pixel-dense, larger-than-life content promised by Ultra HD is ready for prime time, operators want to be ready. Meanwhile, HEVC compression technology is helping them optimize their networks and deliver better-quality video to consumers. For the past few years, Broadcom has been helping cable, IPTV, satellite and terrestrial operators, along with their set-top box partners, get ready with a behind-the-scenes technology that enables delivery of Ultra HD content. For technical types, it's a data compression standard called High Efficiency Video Coding (HEVC), which doubles the data compression ratio, compared to previous standards, without compromising video quality. That means that high-definition, 1080p video can be delivered with about half the bandwidth over home broadband connections. This week, Broadcom is showing off its latest HEVC-enabled offerings for operators from around the globe at the International Broadcasting Conference trade show in Amsterdam. HEVC compresses the video and then sends it to the TV or set-top box, reducing the file size and bandwidth requirements without the need for fatter pipes. The video is then decompressed before being displayed in its full resolution. The same bandwidth-crunching properties apply to HD streaming as well, which is what enables operators to find more efficiency in their networks and lower the cost of deployment. Ahead of the event, the company announced a slew of releases that span from new partners to a portfolio of eight new single-chip SoCs that bring HEVC to terrestrial broadcasters, enabling them to deliver more HD channels on their existing infrastructures. 
Here's a news roundup:
	Korea Telecom Launches Broadcom-enabled Ultra HD IPTV Service
	Rovi Teams with Broadcom on Advanced, Cloud-Based Entertainment Discovery and Monetization Capabilities for Set-top Boxes
	Broadcom Announces Next Generation Cisco Security Certification on Four HEVC Ultra HD Connected Home Platforms
	Broadcom Powers TiVo's New Ultra HD Set-top</description>
      </item>
      <item>
         <title>Interest in In-Car Ethernet Connectivity Grows</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/interest-in-in-car-ethernet-connectivity-grows/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/interest-in-in-car-ethernet-connectivity-grows/</guid>
         <pubDate>May 10, 2012</pubDate>
         <description>A special interest group established by Broadcom just six months ago to drive wide scale adoption of Ethernet-based automotive connectivity has seen a 7x growth in members, jumping to 45 with the addition of Bosch, Continental, Jaguar-Land Rover and Renesas Electronics.

The OPEN Alliance (One-Pair Ether-Net) SIG was established to drive innovation for the next-generation car, with a specific focus on improving in-vehicle safety, comfort, and infotainment, while significantly reducing network complexity and cabling costs.

Key to the newly established SIG is the proliferation of Broadcom's BroadR-Reach technology as an open standard. BroadR-Reach is designed specifically to address the stringent requirements of the automotive industry while delivering high-performance bandwidth of 100Mbps over an unshielded single twisted pair cable. By eliminating the need for expensive, cumbersome shielded cabling, automotive manufacturers can reduce connectivity costs by up to 80 percent and cabling weight by up to 30 percent.

At its initial meeting earlier this year, the OPEN Alliance established technical committees to address interoperability requirements, third-party testing platforms, certification procedures and higher data rate specification requirements. The group members met again this month at the Embedded Systems Expo in Tokyo.

Separately, the members recently voted to appoint Dr. Kirsten Matheus, Ethernet Project Manager at BMW, as Chair of the OPEN Alliance.

Related Reading:

	Broadcom BroadR-Reach Ethernet Portfolio Brings Autos into Digital Age
	In-Car Ethernet Paves the Way for New Features, Increased Efficiency [Video]
	The Case for Ethernet in Cars

 </description>
      </item>
      <item>
         <title>Rajiv Kapur in DNA India: &quot;Technology Promises to Redefine the Very Meaning of Highway Safety&quot;</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/rajiv-kapur-in-dna-india-technology-promises-to-redefine-the-very-meaning-of-highway-safety/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/rajiv-kapur-in-dna-india-technology-promises-to-redefine-the-very-meaning-of-highway-safety/</guid>
         <pubDate>September 24, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in DNA India, in which Rajiv Kapur, Senior Director of Business Development at Broadcom, talks about how Broadcom's connectivity tech will enable the cars of the future. From DNA India: Today's advanced in-car technologies are ushering in the most dramatic evolutionary leap in automotive history since the Ford Motor Company was founded in 1903: vehicles that operate without human intervention and a world where car crashes are history. While technology is being blamed for dangerous new driver distractions, it also promises to redefine the very meaning of highway safety. Technology never daydreams, never falls asleep, never reaches for hot coffee, and never tries to text on the freeway. Driver-assist to the Self-Driving Car: Reducing human error was the impetus behind the auto industry's push toward intelligent driver-assist packages with smart features like self-parking, lane-keeping, warnings of obstacles, and automated acceleration or braking. Everything needed for a driverless car is already on wheels, literally. The industry is now focused on the software refinements required to navigate bustling city streets based on input from multiple sensors detecting hundreds of objects at once, including that stop sign in the hand of a crossing guard. The &quot;Driving&quot; Technologies: Requirements for the self-driving car begin with sensors and camera images that provide 360-degree sight, along with intelligent software to analyze sensor/camera input and initiate action. A central computer receives and distributes data throughout the system, sending software commands to the car's electronic controls and actuators. Digital maps and a satellite navigation system are essential for orienting and driving from point A to point B. 
Down the road, self-driving cars will need to communicate with each other and highway infrastructure such as traffic signals and toll</description>
      </item>
      <item>
         <title>Dish Network Hops into CES with Product Upgrades</title>
         <link>https://www.broadcom.com/blog/ces/dish-network-hops-into-ces-with-product-upgrades/</link>
         <guid>https://www.broadcom.com/blog/ces/dish-network-hops-into-ces-with-product-upgrades/</guid>
         <pubDate>January 9, 2012</pubDate>
         <description>LAS VEGAS - The &quot;new&quot; Dish Network was unveiled at the Consumer Electronics Show this morning, complete with new products, new technology, new features and a couple of active little joeys - yes, as in baby kangaroos. I have to admit that seeing live animals at a CES press conference is a first for this trade show veteran, but it was a pretty effective way to promote the new lineup of set-top boxes by name - the Hopper and the Joey. The devices - which create a new whole-home DVR experience - are powered by a 750 MHz Broadcom processor, the fastest satellite receiver processor available today for a quick, responsive on-screen guide. It's one of a series of technologies that Broadcom is delivering to the ecosystem to enable anywhere, any time digital TV. That guide, as well as a number of other upgrades to the user interface, brings new life to the satellite company's offerings by giving the user an enhanced experience that's more reflective of the way people watch television today. Channel surfing gets a whole new meaning as users truly can scroll through quickly - without network delays - to find the programming they want. The Hopper is the main DVR and features three satellite TV tuners, a two-terabyte hard drive and Bluetooth technology for linking to devices such as wireless headphones. The smaller Joey receivers can network into the Hopper so your favorite programs can &quot;hop from room to room so the viewer doesn't have to,&quot; said company President and CEO Joseph Clayton. The new system also concentrates a number of upgrades on delivering enhanced movies - notably through its expanded partnership with Blockbuster - as well as a new music experience through SiriusXM. Customer needs have changed, technology has changed, and so has Dish Network, Clayton said. The</description>
      </item>
      <item>
         <title>3 New Ways to Get a BIG Application Performance Boost Without Spending a Whole Lot of Cash</title>
         <link>https://www.broadcom.com/blog/3-new-ways-big-application-performance-boost-spending-lot-cash</link>
         <guid>https://www.broadcom.com/blog/3-new-ways-big-application-performance-boost-spending-lot-cash</guid>
         <pubDate>August 26, 2013</pubDate>
         <description>One of the great values that comes with using Gen 5 Fibre Channel Host Bus Adapters (HBAs) is that, even when installed into an existing 8Gb Fibre Channel (8GFC) infrastructure, you can get a BIG performance bump without spending a load of cash on new infrastructure or making time-consuming configuration changes. Because application performance is front and center of IT data center concerns, and meeting service level agreements (SLAs) is more critical than ever as more mission-critical applications are being virtualized, Emulex has extended Gen 5 FC benefits by adding three new products, providing data center administrators with more options and flexibility to boost performance and meet SLAs. The new products include the LPe16004 quad-port HBA, the LPe15004 quad-port HBA, and the LPe16202 Converged Fabric Adapter. We'll talk more about the new adapters in a moment. Emulex Labs testing has shown significant performance improvements by using Gen 5 Fibre Channel HBAs. In the example below, in order to get a 25%-45% application performance improvement, you could do either of two things. Spend big on infrastructure upgrades (hundreds of thousands of dollars): more servers, more software licenses, more power, and associated increased IT management costs… Or upgrade to Gen 5 Fibre Channel HBAs (spend a few hundred dollars per adapter) and get: a simple plug-and-play performance upgrade; compatibility with 4GFC/8GFC infrastructures; no additional management costs/complexity; and the same incredible reliability and simplicity of the time-tested FC protocol, but with new features for virtualization, huge IOPS performance and latency improvements, and data integrity protection. Now back to the new products… In addition to the already available single-port and dual-port Gen 5 FC HBAs and mezzanine cards that are available from virtually every OEM today, the following three products join the Gen 5 FC line-up: Emulex LPe16004 Gen 5 FC HBA: The LPe16004 is 
the</description>
      </item>
      <item>
         <title>Faster than the speed of SSD</title>
         <link>https://www.broadcom.com/blog/faster-speed-ssd</link>
         <guid>https://www.broadcom.com/blog/faster-speed-ssd</guid>
         <pubDate>September 15, 2014</pubDate>
         <description>Emulex is pushing the boundaries and continuing to take Fibre Channel to new levels. We just released two new features for download with the 10.2 firmware on the Gen 5 (16Gb) Fibre Channel (FC) Host Bus Adapters (HBAs). The first, ExpressLane™, is a host-side application Quality of Service (QoS) feature that we first discussed at SNW Europe last fall. Bringing queue management to critical applications ensures that performance and Service Level Agreements are met even under peak network traffic. Enterprises now implementing solid state disk (SSD) or flash technology in their fabric are enjoying the blazing fast performance it offers. However, when the Storage Area Network (SAN)-connected flash traffic gets stuck behind lower-priority traffic, users can experience latency and delays in application performance for mission-critical applications. Emulex ExpressLane™ works by designating a Logical Unit Number (LUN) on the SAN that carries critical data and creating a priority lane for it. The host-side HBA ensures that frames intended for that priority LUN get out quicker. This means that slower, high-throughput operations such as backup will not slow down mission-critical data flows, keeping the network flexible. ExpressLane offers a digital 'carpool lane' for enterprise networks to address today's traffic problems. The second feature - Brocade ClearLink - adds support for Brocade's diagnostic (D_port) feature. This Emulex-supported feature enables HBAs with Gen 5 FC optics to perform a battery of diagnostic tests with Brocade switches to find errors before they impact communication and to pinpoint faulty optics. Made by engineers for engineers, this feature helps save time and keeps the network running smoothly, heading off critical crashes. Finding a single faulty optic or cable can take hours when testing one at a time. Automate the tests and sit back and enjoy a cup of coffee while the network</description>
      </item>
      <item>
         <title>Catch a Rising Star – Nimble Storage and Fibre Channel</title>
         <link>https://www.broadcom.com/company/blog/catch-rising-star-nimble-storage-fibre-channel</link>
         <guid>https://www.broadcom.com/company/blog/catch-rising-star-nimble-storage-fibre-channel</guid>
         <pubDate>November 18, 2014</pubDate>
         <description>Today, Nimble Storage announced its foray into Fibre Channel (FC) storage with support for Gen 5 (16Gb) FC in the Nimble CS-series Adaptive Flash arrays. Wait, what? Why is Nimble, who cut their teeth on iSCSI-attached hybrid storage, where they are blowing out huge numbers, moving into a "dying" market? After all, Nimble blew past its own revenue guidance by 89% last quarter, so obviously they are doing something right. The fact is, there are large deals to be had in FC storage, and 16GFC is coming on strong, having doubled share in the last quarter to 10% of the total FC market. (Crehan Research, Sept. 2014 report, slide 13) So, maybe the rumors of FC's death have been exaggerated. Nimble was leaving money on the table. Many of the world's largest data centers run their mission-critical workloads on FC and will continue to do so due to its reliability, security and scalability. Recently, the Fibre Channel Industry Association (FCIA) launched a campaign discussing FC as a trusted foundation for storage, showcasing such customers as AOL, Rackspace and Symantec. Nimble Storage has been increasing its presence in large enterprises, and this offering allows them to provide these customers with a suite of products to meet their needs across mission-critical enterprise applications, disk-based and in-memory databases, virtual/cloud environments and data analytics. The combination of FC with flash-based storage gives customers ideal conditions for a high-performance, reliable storage architecture. With this release, Nimble is taking its business to the next level. Emulex is proud to have partnered with Nimble Storage to enable FC support in these arrays. Our many customers can now enjoy one more enterprise-class array, which supports Emulex's performance-leading Gen 5 FC technology end-to-end. We look forward to catching the Nimble rising star and riding with them.</description>
      </item>
      <item>
         <title>In Television, the Future is Now</title>
         <link>https://www.broadcom.com/blog/in-television-the-future-is-now</link>
         <guid>https://www.broadcom.com/blog/in-television-the-future-is-now</guid>
         <pubDate>July 13, 2012</pubDate>
         <description>A new level of digital convenience and sophistication has come home. TV today is on the go throughout your home, while you travel, even in your pocket. Users can record a program in one room and watch it in another. Sports fans can catch a live baseball game on a smartphone. Jet-setters can watch their favorite TV show on the plane. How is this happening? Broadcom is enabling zippy broadband speeds and wireless connectivity, which are paving the way for a new world of entertainment. These technologies are coming together to create a new reality, feeding a change of habits and expectations for modern-day TV viewers. This phenomenon is transforming the very nature of home entertainment, making the primary TV just one of several screens to view content, and the home just one place to watch it. Where home entertainment was once firmly defined by the family television, today a wherever, whenever mentality drives audience expectations well beyond the living room. Thank You, Internet Internet-connected devices and the many technologies enabling them have fueled explosive growth in video views. This amounts to a monumental shift in how people consume TV and Internet content, driven by technologies and standards created and nurtured by industry giants like Broadcom. The ability to share broadcast and Web content securely via multiple screens is a direct result of this ongoing innovation. In-home TVs and PCs or laptops remain the most popular devices for watching video content. But consumer habits are subtly shifting, according to a recent survey from research consulting firm Frank N. Magid Associates. More than half (56 percent) of people with online access say they watch video on a mobile phone at least once a month, the survey showed. Some 28 percent say they watch video on a mobile phone daily. Analytics firm Ooyala, in its</description>
      </item>
      <item>
         <title>Broadcom, Rovi Open Doors for Enhanced Entertainment at IBC [Video]</title>
         <link>https://www.broadcom.com/blog/broadcom-rovi-open-doors-for-enhanced-entertainment-at-ibc-video</link>
         <guid>https://www.broadcom.com/blog/broadcom-rovi-open-doors-for-enhanced-entertainment-at-ibc-video</guid>
         <pubDate>September 5, 2012</pubDate>
         <description>Call it TV with a new twist. When Blu-ray players hit the scene a few years ago, they not only introduced viewers to vibrant new high-definition picture quality but also a lineup of new capabilities, such as the ability to download a favorite movie or listen to an online radio station created on a laptop. Today, the technology has spread its wings, going beyond Blu-ray players and moving into many of the set-top boxes that consumers already have connected to their TVs. As such, viewers are quickly becoming accustomed to the instant delivery of some of these Internet-powered services. Read Rovi's press release about the certification here. Broadcom, of course, has been instrumental in powering some of these technological breakthroughs. Today, we're happy to announce that our IP set-top box platforms have achieved the DivX Plus Streaming Certification, a milestone that solidifies our platform for the DivX Plus Streaming technology. That technology provides instant, easy access to enhanced streaming content on the go, making movie watching better with 1080p high-def picture, multilingual subtitles, smooth fast-forward and rewind, instant playback across devices and the ability to download and play back when offline. For service providers, DivX Plus Streaming is a differentiated feature that can be offered to customers. Broadcom powers this high-performance viewing experience with its easy-to-implement, certified DivX Plus Streaming BCM7241 IP set-top box platform. The certification and associated technology are being touted this week at IBC 2012 in Amsterdam, an important industry gathering for European cable, IPTV and satellite markets. In fact, the streaming service is ripe for expansion in Europe, where it is offered by electronics retail giant Media Markt. 
Watch the video to learn more about Rovi's DivX Plus Streaming offering: Related: IPTV Revolution in Your Living Room: Broadcom at IBC Amsterdam Broadcom's Full-Band Capture: Digital Tuning That Enables Much More</description>
      </item>
      <item>
         <title>Dr. Ali Abaye in John Day's Automotive Electronics: Trusted Ethernet Secures the Connected Car</title>
         <link>https://www.broadcom.com/blog/dr-ali-abaye-in-john-days-automotive-electronics-trusted-ethern</link>
         <guid>https://www.broadcom.com/blog/dr-ali-abaye-in-john-days-automotive-electronics-trusted-ethern</guid>
         <pubDate>October 15, 2013</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in John Day's Automotive Electronics, in which Dr. Ali Abaye, Senior Director of Product Marketing at Broadcom, talks about Ethernet in the connected car. From John Day's Automotive Electronics: Over the past decade, the volume of electronic components in automobiles has increased at a dramatic rate. In fact, analysts predict that by 2025, 100 percent of vehicles will be connected. With the advent of autonomous vehicles, the evolution of car connectivity has gained sharp momentum. In a recent J.D. Power study, 82 percent of drivers surveyed expressed an interest in connecting their smartphone to the vehicle infotainment system. Unfortunately, today's array of in-vehicle technologies falls short of the advanced networking capabilities needed for a truly connected car. That's why developers have been clamoring for a faster, scalable, flexible, cost-effective networking protocol. Most importantly, they want a solution that can offer fail-safe protections against malfunctions and malicious cyber-attacks from would-be hackers. The King of secure connectivity takes to the road The global standard of Ethernet, for decades the world's most popular and reliable networking technology, has a long history of successful and secure deployment in dynamic, ever-changing, plug-and-play technology environments. Ethernet's proven security features have an added advantage in automotive applications: the devices and configurations of in-car networks are known and predictable, so identifying and protecting against threats can be a finely tuned process. Fully optimized for in-vehicle applications and capable of delivering bandwidth of up to 100 Mbps, today's automotive Ethernet solutions run over light, inexpensive wiring that slashes connectivity costs up to 80 percent and reduces cabling weight up to 30 percent. 
Automotive Ethernet switch networks rely on point-to-point communication, using bandwidth far more efficiently than</description>
      </item>
      <item>
         <title>Ali Abaye in Yahoo Voices: &quot;The Connected Car Will Transform into a Digital Living Room on Wheels&quot;</title>
         <link>https://www.broadcom.com/blog/ali-abaye-in-yahoo-voices---the-connected-car-will-transform-into-a-digital-living-room-on-wheels-</link>
         <guid>https://www.broadcom.com/blog/ali-abaye-in-yahoo-voices---the-connected-car-will-transform-into-a-digital-living-room-on-wheels-</guid>
         <pubDate>March 21, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Yahoo Voices, in which Dr. Ali Abaye, Senior Director of Product Marketing, Infrastructure and Networking Group at Broadcom, talks about the future of connected cars. From Yahoo Voices: When mobile phones first appeared, few could have foreseen the revolutionary lifestyle changes this small device would catalyze. The evolutionary leap to smartphones has turned mobile devices into tiny computers that provide a connection point for nearly everything we do. Now, the barrage of technological advancements that transformed the telecommunications industry is paving the way for equally seismic shifts in automotive design. Bluetooth Opens the Door The car you drove just a few years ago moved you from point A to point B, with your radio or mobile phone providing the only connection to the outside world. Hands-free communication arrived with the advent of Bluetooth technology, already proven to be a life-saving feature. Today, low-power Bluetooth Smart technology is spawning explosive growth in wireless applications, including wearable technology that captures and transmits biometric data. Bringing wearable smart devices into a car makes it possible for a vehicle to warn a driver who is falling asleep or act preemptively to avoid other potential dangers based on critical indicators like glucose levels, blood pressure or blood alcohol levels. Wi-Fi Expands the Possibilities Another essential component of the connected car is Wi-Fi technology, which makes it possible for a vehicle to sync and connect without cellular service. Wi-Fi allows manufacturers to push software upgrades and new features directly to the vehicle. Drivers can use their smart mobile device and Wi-Fi apps to remotely unlock their car, activate the climate control system, find their vehicle in a crowded parking lot, and</description>
      </item>
      <item>
         <title>Broadcom's Pinpoint Navigation Gets an Automotive Upgrade at CES 2016</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/broadcoms-pinpoint-navigation-gets-an-automotive-upgrade-at-ces-2016/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/broadcoms-pinpoint-navigation-gets-an-automotive-upgrade-at-ces-2016/</guid>
         <pubDate>January 6, 2016</pubDate>
         <description>Broadcom is taking its advanced Global Navigation Satellite System technology on the road at the 2016 Consumer Electronics Show with a new automotive-grade chip that adds tri-band reception for all visible satellite constellations. The BCM89774, announced today ahead of the tech industry's biggest trade show, is an automotive GNSS chip that simultaneously supports multiple satellite groups, including Galileo (E.U.), GLONASS (Russia), SBAS (U.S., Europe, Japan, India), QZSS (Japan) and BeiDou (China). Not only does it aim to improve location accuracy and positioning, especially in the urban canyons of crowded cities, but it also paves the way for satellite tracking features in the car that go beyond standard dash navigation systems. Broadcom's offering is the only one on the market that does this while surpassing stringent AEC-Q100 requirements for automotive applications, said Richard Barrett, director of Automotive Wireless Connectivity at Broadcom. The big differentiation is that we are the only product sampling with full-band capture radios that enable reception from all three satellite radio bands, he said. Car Talk Automotive tech has become a huge feature at CES in the past decade, and the trend doesn't seem to be slowing any time soon. Nine major automakers and more than 100 auto tech companies are on hand at this year's show, representing a 25 percent increase over last year, said Consumer Technology Association Chief Executive Gary Shapiro. &quot;Companies at CES 2016 will unveil a complete immersive infotainment experience for the car with advances in active window displays, accident notifications and nav systems,&quot; he told reporters last month at a preview event. 
Underscoring the importance of the connected car to the consumer electronics industry, General Motors Chief Executive Officer Mary Barra is giving the keynote address at CES this week on the topic of redefining personal mobility. Satellite Ready What makes</description>
Underscoring the importance of the connected car to the consumer electronics industry, General Motors Chief Executive Officer Mary Barra is giving the keynote address at CES this week on the topic of redefining personal mobility. Satellite Ready What makes</description>
      </item>
      <item>
         <title>Broadcom Combo Chip Gets a 2012 CES Innovations Design and Engineering Award</title>
         <link>https://www.broadcom.com/blog/broadcom-combo-chip-gets-a-2012-ces-innovations-design-and-engi</link>
         <guid>https://www.broadcom.com/blog/broadcom-combo-chip-gets-a-2012-ces-innovations-design-and-engi</guid>
         <pubDate>November 10, 2011</pubDate>
         <description>2012 CES Design and Engineering Award Recognizes Chip that Drives Innovation in Smart Phones and Tablets Broadcom's BCM4330 combo chip is named a 2012 CES Innovations Design and Engineering Award Honoree Wireless connectivity solution, which provides the latest Bluetooth and Wi-Fi features, is being adopted by top-tier phone and tablet makers to support more media and data applications Award exemplifies Broadcom's leadership in wireless connectivity for consumer electronic devices Broadcom Corporation has been named a 2012 CES Innovations Design and Engineering Awards Honoree in the Embedded Technologies product category for its InConcert BCM4330 wireless connectivity combo chip. The announcement was made at the 2012 CES New York Press Preview in New York City. The highly integrated, third-generation combo chip supports today's most innovative and compelling mobile applications. The BCM4330 was selected for its combination of low power, small size and advanced wireless functionality, allowing mobile device makers to deliver new, engaging mobile experiences. Robert Rango, executive vice president and general manager of Broadcom's Mobile and Wireless Group, said Broadcom is &quot;proud to be selected by the Consumer Electronics Association for this prestigious award, highlighting our commitment to driving innovations that bring the newest, most advanced features like Wi-Fi Direct and Bluetooth low energy to next generation CE devices. The BCM4330 represents the pinnacle of wireless semiconductor integration, with its benefits now available in some of the most popular mobile devices on the market.&quot; The BCM4330 also supports Wi-Fi Direct and Bluetooth High Speed, which enable mobile devices to communicate directly with each other at high speed without having to connect first to an access point, supporting multiple wireless device-to-device applications and usage models. 
In addition, the Broadcom BCM4330 was the industry's first combo chip solution certified with the Bluetooth 4.0 standard, which enables Bluetooth Low Energy applications. This makes the BCM4330 the ideal</description>
      </item>
      <item>
         <title>Broadcom Powers Huawei's Small Cell Rollout</title>
         <link>https://www.broadcom.com/blog/broadcom-powers-huaweis-small-cell-rollout</link>
         <guid>https://www.broadcom.com/blog/broadcom-powers-huaweis-small-cell-rollout</guid>
         <pubDate>October 30, 2012</pubDate>
         <description>Small cells are big business for telecom service providers looking to better serve their data-hungry customers.

Yet as data use skyrockets, so does demand on the networks. Information and communication technology companies are challenged to find ways to deliver faster and more efficient cellular connectivity with their existing 3G networks.

The answer may lie in small cells, a relatively new market for Broadcom that helps operators get the most out of their networks and delivers seamless connectivity to heavy data users.

Think of small cells as mini base stations. They are similar to Wi-Fi access points, but run over licensed spectrum and can be deployed quickly and inexpensively inside an office building, someone's home or a crowded public space. They boost network signals, thereby improving 3G network coverage and capacity. Users don't notice them and only see the benefits: faster data rates and higher voice quality.

Cellular service providers are increasingly looking to small cells and Wi-Fi to help their customers get better coverage indoors and in crowded outdoor hot spots. Sales of small cells are expected to hit $2 billion by 2016, according to an Infonetics study this year. That's why Huawei, one of China's top telecommunications companies, tapped Broadcom's Small Cell Baseband Processor family for its small cell access point deployments. The announcement comes out of the Small Cell Global Congress this week in Berlin.
Read the press release.

Broadcom small cell technology is being incorporated into Huawei's ePicoxx product line. The technologies pave the way for improved 3G cellular network performance, helping indoor and outdoor hotspots deliver speedy connections that use less power and, in turn, cost less money.

 Related:

	Tomorrow's Mobile Network Delivered Today
	Broadcom Fuels the Affordable Smartphone Revolution
</description>
      </item>
      <item>
         <title>Closing out CES with a Broadcom Buzz for 2012</title>
         <link>https://www.broadcom.com/blog/ces/closing-out-ces-with-a-broadcom-buzz-for-2012/</link>
         <guid>https://www.broadcom.com/blog/ces/closing-out-ces-with-a-broadcom-buzz-for-2012/</guid>
         <pubDate>January 14, 2012</pubDate>
         <description>All good things must come to an end - and it's safe to say that Broadcom at CES this year was a good thing.



The news announcements were compelling. The booth traffic was hearty. The interest in Broadcom technology was widespread. And the Blog Squad worked to bring it all together, in one place, so that those who weren't in Las Vegas this past week could have a near-CES experience. And now, it's over - until next year, of course.

Between now and then, expect to hear more about Broadcom's initiatives around connectivity. We're certainly not done talking about in-car Ethernet and all that it will be able to deliver. We're looking forward to seeing how our partners push the limits of growth on Android-powered television. We'll continue to work with companies in global markets to bring advanced technology to smartphones, set-top boxes and smart TVs.

From left to right: Willy Wong, Sam Diaz, Eric Lin and Prashant Mantha, the Broadcom Blog Squad for CES 2012.

And let's not forget about 5G WiFi, a technology that the industry is already excited about and one that we'll be hearing more about in 2012.

As we close out CES, special thanks goes out to the Blog Squad - pictured here - and especially all the folks who worked tirelessly behind-the-scenes in Las Vegas and back home to make sure that our news made it to this blog in a timely, attractive and responsible format.

We enjoyed sharing CES with you. We hope you enjoyed following CES with us.</description>
      </item>
      <item>
         <title>Broadcom and Samsung Bring Google's Android Experience to the TV</title>
         <link>https://www.broadcom.com/blog/broadcom-and-samsung-bring-googles-android-experience-to-the-tv</link>
         <guid>https://www.broadcom.com/blog/broadcom-and-samsung-bring-googles-android-experience-to-the-tv</guid>
         <pubDate>January 8, 2013</pubDate>
         <description>The momentum behind the growth of Google's Android ecosystem is showing no signs of a slowdown. Android's online store counts some 700,000 apps and games in its portfolio and last year added movies, TV shows, music, books, news and magazines to the lineup. In September, Google's Eric Schmidt said there are already more than a half-billion Android devices on the market and that more than one million new devices are activated daily. Now, Google is taking it to the next level. The Android ecosystem is heading to smart TVs in Korea, via specially branded set-top boxes for Korea Telecom. Consumer electronics titan Samsung is betting big on all of the Android love by bringing it to digital TVs in Korea with the Samsung SMT-E5015 SmarTV box. With a little help from Broadcom's BCM7356 satellite set-top box system-on-a-chip (SoC), Samsung is enabling the apps consumers love in Google Play's Android Market, from Angry Birds to Instagram, to bring a familiar entertainment experience to the digital living room. Broadcom and Samsung achieved a special certification by Google that's set to enable service providers to offer subscribers access to new Android-based applications, including Google Play Store, Play Video, Play Music and Search on their TVs. It's a logical expansion for Google, and analysts recognize that, with millions of consumers already using Android-powered smartphones and tablets, there's a potential appeal to an Android set-top box, especially as a companion to other mobile devices. These boxes can be complementary to your smartphone, said Michael Inouye, TV and video analyst at ABI Research. The future of connected CE will ultimately work together with mobile devices and not against them. Other CE devices like connected TVs and game consoles are already integrating mobile devices into the user experience; the same will likely prove true for smart set-top boxes as well. If the Android boxes gain popularity,</description>
      </item>
      <item>
         <title>Keynote Highlights State of the CE Industry, Kicks off CES</title>
         <link>https://www.broadcom.com/blog/keynote-highlights-state-of-the-ce-industry-kicks-off-ces</link>
         <guid>https://www.broadcom.com/blog/keynote-highlights-state-of-the-ce-industry-kicks-off-ces</guid>
         <pubDate>January 8, 2013</pubDate>
         <description>The official International Consumer Electronics Show may have unofficially started yesterday, but the show doesn't really begin until Gary Shapiro's smiling face is broadcast to thousands of people seated in a ballroom at the Venetian Hotel. Shapiro, the public face and CEO of the Consumer Electronics Association (which puts on the annual tech trade show), kicks off the event each year with his state of the industry address, a chance for him to talk about current trends and technologies. But this is no dry PowerPoint presentation. Shapiro pumped up the crowd of thousands with a grand entrance to LMFAO's Party Rock Anthem and Rick Ross' Hustlin', a nod to the show's dual purpose: a place for enthusiasts to get excited about tech and a playing field for navigating business pitches and making deals. Shapiro was proud to tout one new business development that plays into the bigger trend of how consumers want to share their content on screens of all types. He invited executives from big Hollywood movie studios to the stage to talk up their new partnerships with consumer electronics companies (including LG Electronics, Panasonic, Samsung, Vizio, Toshiba, Philips and Sony) on a forthcoming home disc-to-digital cloud conversion service that will allow consumers to stream movies they already own on disc. It falls under a theme that's being illustrated across the show: the idea that consumers want to watch the content they want, when they want and where they want, and they don't want to have to worry about where that content actually lives, so long as they can access it from a number of connected devices. Before handing over the stage to Panasonic for a presentation that talked up technologies such as the Connected Car, among others, Shapiro went on a bit of a rant about another top-of-mind concern for</description>
      </item>
      <item>
         <title>Boxee's Latest Set-Top Creation on Display at CES</title>
         <link>https://www.broadcom.com/blog/ces/boxees-latest-set-top-creation-on-display-at-ces/</link>
         <guid>https://www.broadcom.com/blog/ces/boxees-latest-set-top-creation-on-display-at-ces/</guid>
         <pubDate>January 10, 2013</pubDate>
         <description>The 2010 debut of the Boxee Box garnered a lot of attention for the scrappy Israeli startup of the same name, drawing appeal for its hefty tech chops as well as its eye-catching industrial design. [caption id=&quot;attachment_6767&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Boxee TV is a cloud-based DVR experience that offers playback on mobile devices.[/caption] This year, Boxee is back with a new, sleeker design and a $99 price tag that's sure to catch the attention of price-conscious consumers looking for some extra freedom when it comes to watching television. Certainly, the concept of what Boxee offers, a cloud-based DVR experience with playback on mobile devices, is intriguing. But at the Broadcom booth at the International Consumer Electronics Show, the buzz is about the technology that's driving this new television experience. The guts of the Boxee Box, with its super-star offering of unlimited DVR storage in the cloud as well as a grab bag of Internet apps and OTT content, are chock full of Broadcom connectivity. Boxee Chief Operations Officer and cofounder Tom Sella talked up the BCM7231, a set-top box combo chip that includes Wi-Fi connectivity, tuners and more. &quot;This box is, other than memory, all Broadcom,&quot; Sella said. &quot;It's all-in-one, a Broadcom solution.&quot; The Boxee TV, as the newest device is called, can store your TV recordings, with the ability to stream online video and access playback even when offline. It includes the usual lineup of connected TV apps like Pandora, Spotify, Netflix, YouTube and Vimeo. 
[caption id=&quot;attachment_6765&quot; align=&quot;alignright&quot; width=&quot;300&quot;] The device includes the usual lineup of connected TV apps, including Pandora, Netflix and YouTube.[/caption] Broadcom's high definition IPTV technology does some of the heaviest lifting under the hood of the Boxee TV so the consumer has a seamless multimedia download experience. Likewise, Boxee TV ensures that smartphones, tablets, computers and TVs all are able to</description>
      </item>
      <item>
         <title>From CES 2013: Dish Hopper DVR Powered by Broadcom [Video]</title>
         <link>https://www.broadcom.com/blog/ces/from-ces-2013-dish-hopper-dvr-powered-by-broadcom-video/</link>
         <guid>https://www.broadcom.com/blog/ces/from-ces-2013-dish-hopper-dvr-powered-by-broadcom-video/</guid>
         <pubDate>January 11, 2013</pubDate>
         <description>[caption id=&quot;attachment_6892&quot; align=&quot;alignright&quot; width=&quot;210&quot;] Broadcom Senior Public Relations Manager Dana Brzozkiewicz cuddles a joey at Dish's unveiling of the Hopper at CES 2012.[/caption] Broadcom partner Dish Network made a splash last year when it trotted out a real, live joey (that's a baby kangaroo, for the non-Aussies) to introduce the Hopper, a digital video recorder that lets consumers watch TV in any room of the house. Cute, furry marsupials are a tough act to follow, but Dish is making a go of it again with its newest Hopper DVR. The biggest addition this year is that Dish was able to integrate Slingbox's &quot;fling&quot; functionality, which enables consumers to watch both live and recorded video content anywhere, anytime, on any device: PC, tablet and smartphone. It works by &quot;encoding and place-shifting live and recorded signals&quot; to the operating system of your choice, according to PC World's review. The new Hopper also boasts a meaty 2TB of storage space (which the company said translates to about 500 hours of high-def content or 2,000 hours of standard-def), built-in Wi-Fi and, at the heart of it, the speedy BCM7425, a Broadcom gateway system-on-a-chip that doubles the video bandwidth, beefs up security and lowers power consumption. The chip features dual HD decoding and dual transcoding support for streaming simultaneous video broadcast content wirelessly to multiple devices. The Hopper's &quot;advanced features come to life with a state-of-the-art 750 MHz Broadcom processor, including Wi-Fi and Bluetooth,&quot; said Vivek Khemka, vice president of product management at Dish Network. &quot;All of these features make it the smoothest and the fastest user interface experience.&quot; In the video clip below, Sling Vice President of Product Development Paddy Rao talks with the Blog Squad's Peter Zhao about Broadcom's tech in the new Dish Hopper. Not heading to Vegas? 
Get the latest CES news</description>
      </item>
      <item>
         <title>Storage with Intense Network Growth and the Rise of RoCE</title>
         <link>https://www.broadcom.com/blog/storage-intense-network-growth-rise-roce</link>
         <guid>https://www.broadcom.com/blog/storage-intense-network-growth-rise-roce</guid>
         <pubDate>February 4, 2015</pubDate>
         <description>Last month, the Entertainment Storage Alliance (www.entertainmentstorage.org) held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in storage technologies utilized in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was very appropriate given the explosive growth going on in both data storage and networking. While the primary focus of Storage Visions is storage technologies, this year’s theme acknowledges the correlation between storage growth and network growth. Therefore, among the many sessions offered on increased capacity and higher performance, the storage networking session was specifically designed to educate the audience on advances in network technology – “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.” More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (aka big data) and the growth in streaming video and emergence of 4K video. According to Cisco, global IP data center traffic will grow 23% annually to 8.7 zettabytes by 2018. Three quarters of this traffic will be intra-data center: traffic between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic. (Figures: Global Data Center IP Traffic Forecast and Global Data Center Traffic by Destination, Cisco Global Cloud Index, 2013-2018.) Storage administrators have always placed emphasis on two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. 
When balanced with IOPS and throughput, low latency technologies can bring dramatic benefits to storage. At this year’s Storage Visions conference, I was asked to sit on</description>
      </item>
      <item>
         <title>ESG Labs conducts hands-on testing of Emulex 16G Fibre Channel HBAs, and why you should care</title>
         <link>https://www.broadcom.com/blog/esg-labs-conducts-testing-16g-fibre-channel-hbas</link>
         <guid>https://www.broadcom.com/blog/esg-labs-conducts-testing-16g-fibre-channel-hbas</guid>
         <pubDate>February 12, 2012</pubDate>
         <description>Enterprise Strategy Group (ESG) just posted a new report which documents ESG Lab’s hands-on testing of Emulex LightPulse® 16G Fibre Channel (16GFC) Host Bus Adapters (HBAs), and explores the HBAs’ ability to improve virtualization efficiency and increase performance in an 8Gb or 16Gb environment. The report also covers the ease of management and simplicity of deployment of the LPe16000 series. According to ESG research, Fibre Channel is still the primary storage technology used to support virtualized server environments. When asked to name the factors preventing organizations from using server virtualization more pervasively, two of the top three responses were lack of budget and performance concerns. A Fibre Channel HBA that could enhance performance while reducing latency without having to rip and replace existing SANs would be a compelling proposition. Here are a few of the highlights of the report: ESG Lab was able to deploy and manage an LPe16002 HBA in an existing SAN environment side by side with multiple generations of Emulex HBAs, using Emulex OneCommand™ Manager for a single consistent point of management. ESG Lab confirmed that Emulex has developed a robust and very full-featured vCenter plug-in that provides all the functionality of OneCommand Manager, including the distribution of mass firmware updates directly from the vCenter console. The LPe16000 series of HBAs run the same drivers as previous-generation Emulex adapters, simplifying management and maintenance. The performance of the LPe16002 HBA tested by ESG Lab was particularly impressive, driving more than five times the OLTP IOPS of its 8GFC predecessor. Even more impressive was the fact that this increase in performance came with a 50% decrease in latency. 
ESG Lab’s report on the new Emulex 16GFC adapters echoes the high performance, reliability and management functionality that makes our adapters the clear choice for the toughest virtualized, cloud and mission</description>
      </item>
      <item>
         <title>UFP: New virtual networking technology for System x and Flex System</title>
         <link>https://www.broadcom.com/blog/ufp-new-virtual-networking-technology-for-system-x-and-flex</link>
         <guid>https://www.broadcom.com/blog/ufp-new-virtual-networking-technology-for-system-x-and-flex</guid>
         <pubDate>January 29, 2015</pubDate>
         <description>I’m Tom Boucher and I’m part of the Emulex engineering team that covers the Emulex and Lenovo relationship. My role is systems engineer, so that makes me the one nerd on the team who works with the Lenovo sales engineers, and also with any Lenovo partner systems engineer types, to understand how Emulex technology works with Lenovo technology to make cool stuff. I am more of a fan of “explain cool stuff” than “ramble on about features,” so I decided to tackle a complex topic: the Unified Fabric Port (UFP) feature of Lenovo networking switches. First, if you’d like the technical details, you should read the product guide (previously known as IBM Redbooks) (http://www.redbooks.ibm.com/abstracts/sg248223.html?Open). It’s a great document that will tell you all the things we can do with the Flex System switches (like the EN4093R or the CN4093) or the 8264 rack switch for System x servers. Now, before I give you a quick primer on UFP, I should tell you how we got to UFP and why we like it so much. Back in 2009, the BladeCenter team and what is now Lenovo Networking created a technology called virtual NIC (vNIC). It was designed to allow for multiple independent networks on top of a single 10Gb Ethernet (10GbE) link. It was wildly popular, but it came with a design quirk: it did vNIC by creating a virtual L2 switch using 802.1QinQ (or Q-in-Q) VLAN tagging, a technique previously used for metropolitan area networks or inside hosting centers. It was initially put on the BladeCenter Virtual Fabric switch, and each virtual switch needed an uplink to the outside world if you wanted to do anything more than move Ethernet packets within the BladeCenter chassis. Now, all Lenovo networking switches support this feature</description>
      </item>
      <item>
         <title>Rich Nelson in Multi-Channel News: &quot;2015 is Shaping Up to be an Exciting Milestone in Truly Immersive TV&quot;</title>
         <link>https://www.broadcom.com/blog/rich-nelson-in-multi-channel-news---2015-is-shaping-up-to-be-an-exciting-milestone-in-truly-immersive-tv-</link>
         <guid>https://www.broadcom.com/blog/rich-nelson-in-multi-channel-news---2015-is-shaping-up-to-be-an-exciting-milestone-in-truly-immersive-tv-</guid>
         <pubDate>July 28, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Multi-Channel News, in which Rich Nelson, Senior Vice President of Marketing, Broadband and Connectivity Group at Broadcom, talks about the rise of Ultra HD TV in 2015. From Multi-Channel News: All signs indicate that the latter half of 2015 will usher in a number of service provider deployments of Ultra HD (4K) TV. This shouldn't come as a surprise to anyone: With the promise of a dramatically immersive TV viewing experience, consumers have been purchasing 4K TVs at an accelerating clip. The average price of an Ultra HD set has dropped at a dramatic pace since sets first debuted in 2013, falling 95% or more in some cases. Ultra HD-enabled set-top boxes are also becoming broadly available. Ultra HD content and service providers are gearing up to meet increased demand for 4K programming options. Ultra HD TV Sales on the Rise Falling costs are a big factor in consumer adoption of Ultra HD technology. Demand for 4K TVs is soaring worldwide as entry-level price points drop well below $1,000, model availability expands, and consumers seek out the next best technology as they upgrade aging flat-panel TVs. For example, the 80+ inch Ultra HD TVs announced at the Consumer Electronics Show in January 2013 were priced at more than $20,000. Today, Ultra HD TVs from widely available brands such as Samsung and Sony start at just $799; Sharp and Vizio sets start at just $599. That said, the capability to display crystal-clear Ultra HD is just one piece of the 4K ecosystem. At the heart of the technology are the enabling coding standards that allow service providers to move bit-rate-intensive Ultra HD TV through legacy</description>
      </item>
      <item>
         <title>Broadcom at CCBN: Beijing Brings Cable to the Forefront</title>
         <link>https://www.broadcom.com/blog/emerging-markets/broadcom-at-ccbn-beijing-brings-cable-to-the-forefront/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/broadcom-at-ccbn-beijing-brings-cable-to-the-forefront/</guid>
         <pubDate>March 20, 2013</pubDate>
         <description>The growth of China's middle class, which has already surpassed 300 million people, is being met with initiatives to update the country's cable TV infrastructure. The most up-to-the-minute technologies will give consumers the opportunity to experience new digital features and services that come with stronger broadband connections. This week, Broadcom will be showcasing its innovative technologies for the cable industry at the China Content Broadcasting Network Exhibition (CCBN) in Beijing, where companies ranging from service providers to embedded chip makers will be scoping out the future of cable in the world's most populous nation. CCBN is expected to draw more than 1,000 exhibitors and some 90,000 attendees from more than 30 countries. As cable and broadband offerings take shape in China, Broadcom has been investing in technologies that will help Chinese consumers access the latest options around digital content and other Internet-based services. At the Consumer Electronics Show in January, Broadcom announced a new Gigabit Passive Optical Networking (GPON) system-on-a-chip, technology that accelerates fiber-to-the-home efforts throughout China. Prior to that, at the International Coverage and Transmission Conference (ICTC) in November, Broadcom talked up its efforts to help China with its digital TV conversion and massive Next-Generation Broadband initiative, specifically the development of technology that would work with the China-DOCSIS standard. This standard helps cable providers in China deliver both Internet and cable over the same wiring, saving on infrastructure costs and speeding up adoption. The support for C-DOCSIS is set to accelerate the rollout of Next-Generation Broadband, a plan that could reach 315 million Pay-TV households by 2017. 
Related: C-DOCSIS Greenlighted, Ushers Next-Gen Broadband to China. Rapid broadband adoption is set to quickly usher in all types of new Pay-TV offerings, including Internet protocol television (IPTV) that will allow broadcasters to transmit into every Chinese home over secure, high-speed connections.</description>
      </item>
      <item>
         <title>CES Opening Day Brings Flurry of News from Broadcom and Partners</title>
         <link>https://www.broadcom.com/blog/ces/ces-opening-day-brings-flurry-of-news-from-broadcom-and-partners/</link>
         <guid>https://www.broadcom.com/blog/ces/ces-opening-day-brings-flurry-of-news-from-broadcom-and-partners/</guid>
         <pubDate>January 10, 2012</pubDate>
         <description>[caption id=&quot;attachment_561&quot; align=&quot;alignleft&quot; width=&quot;300&quot;] Demos at the Broadcom booth, photo by Willy Wong[/caption] The Consumer Electronics Show is underway and Broadcom - which is powering a suite of products across many different electronics categories - has released official news headlines tied to its partners' launches. In all, a half-dozen news releases were issued this morning, joining several other CES-related announcements that were unveiled last week. The announcements making headlines today are: LG Electronics has adopted Broadcom's Bluetooth products for multiple new TVs and accompanying remote controls. Increasingly, Bluetooth is being adopted by consumer electronics makers as an alternative to traditional infrared technology for connecting peripherals to their products. With Bluetooth, TV user experiences can be enhanced with things like gestural remotes for casual gaming and voice recognition for content search and selection. More importantly, the bandwidth is greater with Bluetooth and the interference with other radio technologies is less. Broadcom and Comcast announced that Broadcom's connected-home set-top box platforms will support the Comcast Device Software Reference Design Kit (RDK) and tru2way on-screen guides. The RDK is a pre-integrated software bundle that powers tru2way, IP or Hybrid Set-Top Boxes. Through Broadcom technology, developers can use Comcast's RDK to create rich, multi-screen TV home entertainment experiences. Broadcom announced a robust Wi-Fi Display software stack that allows users of smartphones, tablets and laptop computers to play games, video and thousands of apps on a larger HDTV screen by transmitting the mobile screen images over a wireless connection. 
Broadcom and Hisense, China's leading TV manufacturer, announced a joint initiative to design and develop Wi-Fi-enabled devices for the Chinese market, including Hisense's first Smart TV. With the technology, consumers will be able to transfer data or videos between their TVs and other mobile devices, stream services from the Internet, and download entertainment, business or personal applications. Broadcom and Qualcomm Atheros endorsed</description>
      </item>
      <item>
         <title>Near-Field Communications: Not Just for Payments  [Video]</title>
         <link>https://www.broadcom.com/blog/ces/near-field-communications-not-just-for-payments-video/</link>
         <guid>https://www.broadcom.com/blog/ces/near-field-communications-not-just-for-payments-video/</guid>
         <pubDate>January 11, 2012</pubDate>
         <description>When you hear the term NFC, what's the first thing that comes to mind? No, I'm not talking about the National Football Conference, but rather Near Field Communications. Think Google Wallet. NFC has the potential to be the future of simplified connectivity--not just in your wallet, but your living room as well. NFC complements other wireless technologies such as Bluetooth and Wi-Fi, offering a quick and easy way to initiate a wireless connection. Rather than digging into menus and manually entering pairing keys and passwords to connect devices, all you have to do is place two NFC-enabled devices within proximity of each other and &quot;tap&quot; to open a connection. Imagine this scenario: You are relaxing at home watching a movie on your NFC-enabled smartphone. You want a more immersive audio experience, so you grab your NFC-enabled Bluetooth headphones and tap them with your phone. The NFC pairing triggers a Bluetooth connection and your movie's audio is automatically diverted to your headphones. You decide you want a more sizable visual experience, so you grab your NFC-enabled HDTV remote and tap it with your phone. A Wi-Fi Display connection is triggered and your movie seamlessly streams in HD from your smartphone to your HDTV. Your smartphone is now free to perform other tasks such as texting or checking e-mail, all while simultaneously streaming your movie to the television. Leaving the room? Just tap your smartphone to the remote again and the video jumps back to your device. Broadcom Blog Squad member Prashant Mantha interviewed Broadcom's Ron Wong, associate product line director for Bluetooth in the Mobile &amp; Wireless Group, who demonstrates how NFC--coupled with a Wi-Fi connected display and a Bluetooth-enabled remote, headset and gaming controller--brings easy pairing to the entertainment system. Broadcom announced its first 40nm NFC chips last September. NFC can already be found in smartphones</description>
      </item>
      <item>
         <title>Broadcom's Michael Hurlston on CES Panel: &quot;Six Wireless Technologies You'll Want to Know&quot;</title>
         <link>https://www.broadcom.com/blog/broadcom-s-michael-hurlston-on-ces-panel---six-wireless-technologies-you-ll-want-to-know-</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-michael-hurlston-on-ces-panel---six-wireless-technologies-you-ll-want-to-know-</guid>
         <pubDate>January 9, 2013</pubDate>
         <description>Things got seriously geeky during one of the hundreds of specialized breakout sessions at the International Consumer Electronics Show. Certainly, everyone here at the show wants to know what the next big thing will be. So it's no surprise that a panel called &quot;Six Wireless Technologies You'll Want to Know&quot; would attract a standing-room-only audience. [caption id=&quot;attachment_6758&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Tech experts appear on a CES panel addressing the Six Wireless Technologies You'll Want to Know.[/caption] Among those talking shop and debating the future of wireless tech was Broadcom's own Michael Hurlston, senior vice president and general manager of wireless connectivity combos in the Mobile &amp; Wireless Group. Terms like LTE Advanced, 802.11p and mesh networks were bandied about with ease by members of the panel while some 100 tech reporters, industry analysts and trade show-goers jostled for a spot in the room. Hurlston touched on several of Broadcom's biggest initiatives on a panel with executives from T-Mobile, Sprint, wireless network tester Octoscope and even another wireless chip company (and Broadcom competitor), Qualcomm. Hurlston, together with these industry peers, dropped seriously relevant knowledge about wireless device proliferation, Hotspot 2.0 standards, connected cars and small cell base stations, and pondered the ways in which sharing, reusing and parceling spectrum could address the challenge of the wireless spectrum crunch. The theme of the day: All wireless spectrum is precious and all wireless networking technologies are needed; the challenge is getting them to work together seamlessly. 
Some key topics debated by Hurlston and the rest of the wireless industry crew at the panel: [caption id=&quot;attachment_6762&quot; align=&quot;alignright&quot; width=&quot;200&quot;] Broadcom's Michael Hurlston.[/caption] Is Hotspot 2.0 a viable option for offloading data traffic from overloaded cellular networks? Hotspot 2.0 is a new certification standard to help mobile devices connect seamlessly to wireless networks. It's just one tool in a box full</description>
      </item>
      <item>
         <title>Beyond Triple Play: Innovation on Display at Broadband World Forum</title>
         <link>https://www.broadcom.com/blog/beyond-triple-play-innovation-on-display-at-broadband-world-for</link>
         <guid>https://www.broadcom.com/blog/beyond-triple-play-innovation-on-display-at-broadband-world-for</guid>
         <pubDate>October 16, 2012</pubDate>
         <description>You've undoubtedly heard of triple play: the digital phone, TV and Internet package that telecom, cable and satellite companies have been pushing to consumers for the past decade. By offering an affordable bundle to consumers, providers have driven broadband adoption around the world; one third of all households now have Internet. As a result of that growth, two things happened: customers started expecting richer connectivity experiences and competitors started to differentiate their services. For a company like Broadcom, the effect has been a demand for innovative new technologies that will allow service providers to continuously up their game. &quot;The bandwidth war keeps service providers looking for new innovation in terms of what they can offer, and that really drives us to continue to develop new products,&quot; said Gregory Fischer, VP &amp; GM of Broadcom's Broadband Carrier Access group. &quot;It also creates turnover in their networks. Carriers turn to us to make their offerings faster, more reliable and different from their competition.&quot; Broadcom enables companies such as AT&amp;T in North America or Belgacom in Europe to improve their services on both sides of the curb: in the back-end infrastructure that powers the technology, as well as the in-home experience. From adding 5G WiFi to residential gateways, to deploying new advances in Digital Subscriber Lines (DSL) for improved customer service, Broadcom's technology is powering new ways to triple play, such as sharing content on multiple devices and home automation. This week, at Broadband World Forum in Amsterdam, the industry's biggest tradeshow, Broadcom is showing innovations in DSL, passive optical networking (PON) and Powerline communications. 
Broadcom is helping telecom carriers improve the connection between their infrastructure and their customers' homes by upgrading the existing DSL infrastructure through techniques called vectoring and bonding of phone lines. Vectoring cancels out what's called crosstalk from other phone lines</description>
      </item>
      <item>
         <title>A Miracast-Enabled Future: Next-Gen &quot;Screencasting&quot; Brings Your Content to Life [Video Demo]</title>
         <link>https://www.broadcom.com/blog/a-miracast-enabled-future-next-gen-screencasting-brings-your-co</link>
         <guid>https://www.broadcom.com/blog/a-miracast-enabled-future-next-gen-screencasting-brings-your-co</guid>
         <pubDate>June 3, 2013</pubDate>
         <description>Today, we stare into the screens of many smart devices: mobile phones, tablets and laptop computers. Now, the biggest screen in our lives, the living room TV set, is seeing some upgrades that will give it a spot in the lineup of smart devices. Through a concept known as Wireless Display Mirroring, or screencasting, consumers will soon be realizing the benefits that come with throwing the content from a small handheld screen to a big, center-of-the-living-room display. The Wi-Fi-based technology that makes this happen, called Miracast, has the potential to change the way we think about the TV as a place for gaming, movies and more. [caption id=&quot;attachment_9354&quot; align=&quot;alignright&quot; width=&quot;288&quot;] Download and Share: Learn more about how Broadcom powers next-gen screencasting with Miracast and other wireless tech. Click to expand the infographic.[/caption] That's because Miracast, an open technology standard supported by the Wi-Fi Alliance, is about more than just transmitting videos from your smartphone to the TV. Sure, TVs are getting smarter and more connected, but mobile devices are where all of the good user-generated stuff resides. Beyond video, those devices are where we store our music collections, our favorite social games, our photo albums and more. And while there are ways for people to deliver that content to a big-screen HDTV set in the living room, few of those options are inexpensive, wireless and foolproof. That's where Miracast comes in. 
Miracast allows certified devices to send mirror images of their screens directly to other displays. There are already dozens of devices on the market that support Miracast technology, either via built-in software (as in Samsung's Galaxy S3 and Galaxy S4, and many other Android 4.2-powered smartphones), or by using a small external device, called a dongle, that plugs into a computer's or TV's HDMI port (Rocketfish's video receiver is one example). But that's just the beginning. IHS</description>
      </item>
      <item>
         <title>Pay TV Goes Global: Broadcom Takes on Brazil</title>
         <link>https://www.broadcom.com/blog/pay-tv-goes-global-broadcom-takes-on-brazil</link>
         <guid>https://www.broadcom.com/blog/pay-tv-goes-global-broadcom-takes-on-brazil</guid>
         <pubDate>July 22, 2012</pubDate>
         <description>The television experience in Brazil is on the verge of a major overhaul, and not just from the analog-to-digital switchover that's coming down the pipeline in the next few years. Brazil, considered the largest economy in Latin America, has become the new hot spot for pay-TV services, outpacing Russia, China and India and gaining the attention of companies from Samsung to Netflix in the process. Expect Brazil to be a darling at ABTA, the top broadcasting trade show in Latin America, when it comes to Sao Paulo later this month. The show will feature technologies that are focused on a new television experience for a region that has been experiencing growth in pay-TV subscriptions of more than 30 percent, as well as 20 percent jumps in broadband service subscriptions. But also expect to see technologies that address the challenges that come along with that sort of attention. The demand on existing infrastructure and devices is sure to lead to some growing pains as pay-TV providers jockey for position in the market. Broadcom, which continues to develop technologies to address these types of pain points, will be in Sao Paulo to showcase our customized technologies designed to ease the transition from analog to digital TV in Latin America. A full suite of key home networking standards and technologies, including MoCA, Wi-Fi, HomePlug AV and Full-Band Capture, helps pay-TV operators deliver in-demand services and compelling content to the masses. Researchers with Parks Associates call the growth in Brazil &quot;phenomenal&quot; and said the country is seeing the emergence of new broadband homes that are able to receive these products and services for the first time. And while the growth rates in Brazil are enough to get excited about, it's the potential growth rates for the region that have companies flocking to the region. In Brazil today, so-called terrestrial or</description>
      </item>
      <item>
         <title>Broadcom's Joe Del Rio at NAB: &quot;Consumer TV Behavior is Shifting&quot; [VIDEO]</title>
         <link>https://www.broadcom.com/blog/television-2/broadcoms-joe-del-rio-at-nab-consumer-tv-behavior-is-shifting-video/%09</link>
         <guid>https://www.broadcom.com/blog/television-2/broadcoms-joe-del-rio-at-nab-consumer-tv-behavior-is-shifting-video/%09</guid>
         <pubDate>April 16, 2014</pubDate>
         <description>The way consumers interact with their TV content is constantly evolving.

Joseph Del Rio, associate product line director in the Broadband Communications Group at Broadcom, earlier this month sat down with Beet TV at the National Association of Broadcasters' annual trade show in Las Vegas for an interview that covered the &quot;then&quot; and &quot;now&quot; of how consumers engage with their television content.

He gave a nod to the on-screen channel guide, personal video recording and the ability to &quot;sling&quot; or cast content to other devices (think: smartphone, tablet) around the house.

&quot;We're talking about a world where people will watch what they want, when they want, where they want,&quot; Del Rio told Beet TV. &quot;It's a fundamental change from how content is delivered. We're seeing a transition from fire-hydrant delivery of content to IP content.&quot;

He addressed what's next in live broadcast: Ultra HD content.

Broadcom is working behind the scenes on the leading compression standards to make sure the bandwidth-hogging, pixel-rich Ultra HD content gets to consumers in real-time.

Del Rio said he expects Ultra HD content to become increasingly available to viewers in the second half of 2014, and major events such as the World Cup may be showcased in Ultra HD via live broadcast.

</description>
      </item>
      <item>
         <title>Broadcom's Automotive Ethernet: Ready for the Factory Floor</title>
         <link>https://www.broadcom.com/blog/broadcoms-automotive-ethernet-ready-for-the-factory-floor</link>
         <guid>https://www.broadcom.com/blog/broadcoms-automotive-ethernet-ready-for-the-factory-floor</guid>
         <pubDate>December 11, 2012</pubDate>
         <description>By now, consumers are familiar with how connectivity is bringing the driving experience to a whole new level. They have come to expect that a road trip will include things like turn-by-turn directions, backseat entertainment for the kids and perhaps a little help with parallel parking. Automakers are tasked with bringing a high-tech experience to drivers while keeping costs down and added weight to a minimum. That's where Broadcom's BroadR-Reach technology comes in: it's Ethernet-based connectivity for the car that enables all of these systems while delivering speeds of 100 Mbps and reducing cabling costs and weight. BroadR-Reach Ethernet technology is catching on with manufacturers as an auto connectivity standard. For automakers, it helps cut production costs and reduces design complexity by providing a centralized network. For consumers, it enables the advanced safety and infotainment features drivers and passengers have come to expect. Broadcom has made significant headway into the connected car market since entering just one year ago. Today marks a few big milestones that are set to pave the way for Broadcom's Ethernet-enabled connected car technologies to roll out on production lines all over the world. The OPEN (One Pair Ether-Net) Alliance Special Interest Group, which started a year ago, is now 100-plus members strong. Included in its ranks are global automakers BMW, Hyundai, Daimler, GM and Nissan, along with top-tier manufacturers of driver assistance, safety and infotainment systems such as Harman and Bosch. BroadR-Reach technology is now fully certified to meet the rigorous demands of the global auto industry, earning several certifications that ensure it is of zero-defect quality and in compliance with key standards. And a collaboration with Valeo has achieved production part approval for installation.
[Image: Unshielded twisted pair Ethernet cabling (right) vs. traditional LVDS cabling (left).]</description>
      </item>
      <item>
         <title>Broadcom's Auto Tech in the Spotlight</title>
         <link>https://www.broadcom.com/blog/broadcoms-auto-tech-in-the-spotlight</link>
         <guid>https://www.broadcom.com/blog/broadcoms-auto-tech-in-the-spotlight</guid>
         <pubDate>September 18, 2012</pubDate>
         <description>Perhaps the biggest trend in auto innovation in recent years has been the phenomenon of the connected car. Today's tech-savvy consumer has an insatiable appetite for the latest and greatest connected devices, a trend that's motivating automakers to integrate the hottest apps and functionality into their cars.

 Applications like infotainment, warning systems for object detection, sensors, &quot;smart steering&quot; and more are being made more cost-effective and accessible with technologies like Ethernet.

Ethernet is no longer relegated to the data center: it's attracting attention from car manufacturers like BMW and is recognized as a scalable, cost-effective technology that promises interoperability and easy add-ons for new service offerings.

Broadcom had a visit from Orange County Business Journal tech reporter Chris Cassachia, who got a preview of our pioneering BroadR-Reach automotive Ethernet cabling system. Our engineers gave Chris an in-depth look at Broadcom's technology, and he wrote about our founding partnership with BMW.
Read the OC Business Journal story (PDF).
Automotive connectivity is on the rise: the automotive semiconductor market is expected to grow to $29 billion next year, according to the Strategy Analytics Automotive Semiconductor Demand Forecast 2012 report.
Broadcom's BroadR-Reach Ethernet technologies made a splash at last year's Consumer Electronics Show, when we announced our entrance into the market and partnerships with key industry players.
It's in the spotlight again ahead of next month's Society of Automotive Engineers trade show in Detroit, where we'll be talking about our role in the expanding market.

To learn more about Broadcom's automotive technologies, follow us on Twitter and connect with us on Facebook.

Related:

	 What's Powering Next-Gen Auto Technology?
	Car Connectivity: How Technology Will Change the Driving Experience
	The Case for Ethernet in Cars
	Interest in In-Car Connectivity Grows
	Broadcom Survey: Consumers Want In-Car Connectivity
</description>
      </item>
      <item>
         <title>Winning Connectivity: Broadcom Honored with 2012 CES Innovations Design and Engineering Award</title>
         <link>https://www.broadcom.com/blog/winning-connectivity-broadcom-honored-with-2012-ces-innovations</link>
         <guid>https://www.broadcom.com/blog/winning-connectivity-broadcom-honored-with-2012-ces-innovations</guid>
         <pubDate>January 9, 2012</pubDate>
         <description>Without the ability to connect to data networks and other devices, the smartphone would be a lot less smart.

Broadcom helps make smartphones smarter by combining key wireless connectivity technologies together into &quot;combo chips&quot; that integrate them on a single piece of silicon. Combo chips help smartphone manufacturers slim down their designs and make sleeker phones.

Broadcom's InConcert BCM4330 wireless connectivity solution, with its Wi-Fi, Bluetooth and FM radio components, has been recognized with a CES Innovations Award - an honor bestowed upon the most compelling electronics products of the year, as determined by respected technology journalists and a panel of independent engineers.

The BCM4330 was selected for its combination of low power, small size and advanced wireless functionality that allows mobile device makers to enable new, engaging mobile experiences that extend beyond the handset.

Come hear Broadcom's David Recker, product marketing director for the Mobile and Wireless Group, talk about the award-winning chip at the Innovation Awards Design &amp; Engineering Showcase at the International Consumer Electronics Show this week.

When: Tuesday, January 10, 2012

Time: 3:30 p.m.

Place: The Venetian Las Vegas Hotel, Casino and Resort, Venetian Ballroom

Learn more about the 2012 CES Innovations Awards and read Broadcom's press release announcing the award.</description>
      </item>
      <item>
         <title>Home Theater of the Future: Ultra HD Gets Real at CES</title>
         <link>https://www.broadcom.com/blog/home-theater-of-the-future-ultra-hd-gets-real-at-ces</link>
         <guid>https://www.broadcom.com/blog/home-theater-of-the-future-ultra-hd-gets-real-at-ces</guid>
         <pubDate>January 8, 2013</pubDate>
         <description>By now you've likely heard that the buzz around 4K TV, or Ultra HD, has reached a deafening roar during the first day of this year's International Consumer Electronics Show. Hundreds of thousands of show-goers are flocking to exhibitors' booths today to watch television not for the programming, but rather for the picture quality. Ultra HD TV (or Ultra High Definition) is the insider's lingo for an upcoming display technology the Consumer Electronics Association has defined as delivering a display resolution of at least 8 megapixels, ranging from 3840 x 2160 pixels to more than 4,000 x 3,000. The picture is a bit more complex because there's still no single Ultra HD standard and there are myriad types of content, none of it widely released. Still, early adoption is expected to kick off this year, and Broadcom and other embedded tech companies are getting ready with their supporting casts of products for Ultra HD TV makers, including brand-spanking-new codecs, broadband chipsets and accessories. Samsung made headlines in Vegas yesterday when it unveiled to a cadre of tech reporters a floating, 85-inch Ultra HD TV, dubbed the S9 UHD. The beast of a display threatened to outshine others made by LG Electronics, Sony, Vizio and Toshiba, also on the show floor here at the Las Vegas Convention Center. Although Ultra HD is nabbing headlines, Broadcom is working behind the scenes with cable and satellite operators to make sure that every pixel of all that glorious 4K TV content, when it finally rolls out to consumers, can actually be enjoyed in their homes. Broadcom today announced the BCM7445, a video decoder system-on-a-chip that's set to reside in consumers' primary media gateway to support the delivery of Ultra HD content into the multi-screen connected home. With monster display sizes also comes monster-sized video data content, which threatens</description>
      </item>
      <item>
         <title>Just in Time for CES: Broadcom and Intel Team Up to Drive Wireless Display Adoption</title>
         <link>https://www.broadcom.com/blog/just-in-time-for-ces-broadcom-and-intel-team-up-to-drive-wirele</link>
         <guid>https://www.broadcom.com/blog/just-in-time-for-ces-broadcom-and-intel-team-up-to-drive-wirele</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>The volume of video being consumed over the Internet is growing at an exponential rate, representing about half of all global Internet traffic today and expected to reach 93 percent by 2015. At the same time, the number of devices consumers use to watch video is also on the rise. Researchers estimate that approximately 4.8 devices are in the average U.S. household with a home network, nearly double the figure from just four years ago. The challenge for consumers is how to share their content between devices. That's where technologies such as Intel Wireless Display (Intel WiDi) come into play. Broadcom today became the first Wi-Fi silicon vendor with a license for Intel WiDi technology in PCs. As part of this agreement, Broadcom will integrate Intel WiDi software onto its WLAN chips to help drive adoption of the technology in Ultrabook systems. The multistream 2x2 Wi-Fi data rates in Broadcom's chip, coupled with Intel WiDi software, will deliver a seamless, high-quality experience to users. Wi-Fi Display, Wi-Fi CERTIFIED Miracast and Intel WiDi are based on the same underlying technologies that allow you to do one very useful thing: easily stream content between two devices wirelessly. Those streams will become more commonplace this year as new standards for Wi-Fi-enabled devices eliminate interoperability and compatibility issues. Intel WiDi and the industry-standard Miracast are compatible technologies that improve the consumer experience for sharing video content between devices. Intel and its ecosystem partners have shipped more than 30 million Intel WiDi-capable notebooks. This agreement will help drive the proliferation of the technology across a much broader offering of notebook PCs. Learn more about Intel WiDi, or visit the Intel booth (Central Hall, Booth No. 7252) at the International Consumer Electronics Show, which is this week at the Las Vegas Convention Center. Not heading to Vegas? Get the latest CES news from Broadcom and our</description>
      </item>
      <item>
         <title>IPTV Revolution in Your Living Room: Broadcom at IBC Amsterdam</title>
         <link>https://www.broadcom.com/blog/iptv-revolution-in-your-living-room-broadcom-at-ibc-amsterdam</link>
         <guid>https://www.broadcom.com/blog/iptv-revolution-in-your-living-room-broadcom-at-ibc-amsterdam</guid>
         <pubDate>September 4, 2012</pubDate>
         <description>The digital revolution is heading for the living room, and at the center of it all are the devices that change the way people not only watch, but also interact with, their televisions. This week, the top broadcast content providers and broadband technology makers are showcasing the best of the digital TV revolution at the International Broadcasting Convention in Amsterdam. Count on seeing Broadcom's innovative technologies across the show floor, in devices that many of our partners and customers will have under the spotlight. Be sure to catch a demo of YouView, a new searchable TV guide service for UK viewers. YouView offers catch-up TV (the ability to watch any show aired in the last 7 days) alongside traditional digital video recording (DVR), Video-on-Demand (VoD) and live content, all in one easy-to-use, searchable guide. Or check out the Abox42 M12 IP set-top box that delivers new options for multiscreen viewing like Over-the-Top (OTT) content. TV Evolves with Broadcom: Overwhelmingly, people are engaging with other screens (a laptop, tablet or mobile phone) while the TV is on. Infonetics estimates a whopping 83 percent of viewers engage in multiscreen viewing, but only 39 percent of service providers today offer support for streaming content to those secondary screens. Those that aren't in the game yet soon will be. Infonetics suggests that 67 percent of service providers will deploy multiscreen services by the end of next year, and that more than 80 percent will be on board by 2014. Broadcom is driving innovation around this next phase of the digital TV transformation and has developed technologies suited to everything from mobile devices and set-top boxes to wireless upgrades and back-end infrastructure. The experience of television itself has been a constant evolution from black-and-white to cable to DVR. For the first time, though, television is escaping</description>
      </item>
      <item>
         <title>Ultra HD Heads to the Living Room with Broadcom's Video Technology</title>
         <link>https://www.broadcom.com/blog/home-entertainment/ultra-hd-heads-to-the-living-room-with-broadcoms-video-technology/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/ultra-hd-heads-to-the-living-room-with-broadcoms-video-technology/</guid>
         <pubDate>March 20, 2013</pubDate>
         <description>The arrival of Ultra HD (Ultra High Definition) television, which took center stage at January's Consumer Electronics Show, is poised to be the next big thing in home entertainment. With display resolutions of about 4,000 x 2,000 pixels on screens that reach some 84 inches, the larger-than-life viewing experience is one that consumers are already excited about. But before people can start enjoying all that those screens have to offer, the video content distributors (the IPTV, cable and satellite TV operators) first must figure out how to deliver Ultra HD video content without compromising its quality. That's where Broadcom becomes a key player. In January, Broadcom unveiled the BCM7445, a video decoder system-on-a-chip targeted to reside in digital media gateway devices found in multi-screen, connected homes. With a demo today at the TV Connect trade show in London, Broadcom is helping to push consumer adoption of Ultra HD by integrating the latest MPEG H.265 High Efficiency Video Coding technology into the device. Related: Antix Labs and Broadcom Collaborate On HD Game Service [caption id=&quot;attachment_7988&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Ultra HD display with Broadcom's video decoder tech on display at the Consumer Electronics Show.[/caption] Broadcom is demonstrating its Ultra HD video decoder technology at the TV Connect trade show for the IPTV industry, which is March 19-21 at the Olympia Exhibition Centre in London. &quot;Technologies like this only come around once in a decade or so and offer huge benefits to anyone concerned about the cost of bandwidth or interested in offering premium Ultra HD services to their subscribers in a cost-effective way,&quot; said Aidan O'Rourke, Senior Marketing Director for IP Set Top Box in the Broadband Communications Group at Broadcom. &quot;We plan to continue to drive this technology into a broad range of set-top box chips in the near future.&quot;
With monster display sizes comes</description>
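For a sense of scale behind that bandwidth concern, here is a back-of-the-envelope sketch; the resolution, frame rate and chroma figures below are illustrative assumptions, not numbers from the post:

```python
# Rough uncompressed data rate for a single Ultra HD stream.
# Assumed: 3840x2160 resolution, 30 fps, 8-bit 4:2:0 sampling
# (which averages 12 bits per pixel).
width, height, fps, bits_per_pixel = 3840, 2160, 30, 12

raw_bps = width * height * bits_per_pixel * fps
print(f"{raw_bps / 1e9:.2f} Gbps uncompressed")  # 2.99 Gbps

# An HEVC-class codec must squeeze this by roughly two orders of
# magnitude to reach broadcast-friendly bitrates in the tens of Mbps.
```

Under these assumptions, a single raw stream runs near 3 Gbps, which is why a next-generation codec such as H.265 is a precondition for delivering Ultra HD over existing pipes.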
      </item>
      <item>
         <title>The Case for Ethernet in Cars</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/the-case-for-ethernet-in-cars/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/the-case-for-ethernet-in-cars/</guid>
         <pubDate>January 13, 2012</pubDate>
         <description>With an abundance of in-car infotainment systems such as Toyota Entune, Ford Sync and MINI Connected (most complete with a complementary smartphone app), it's clear that the automobile is the next big thing to be connected. Consumers want more connectivity in the car, and Broadcom is helping make that happen. [caption id=&quot;attachment_655&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Advanced Driver Assistance System (ADAS) demo with Ethernet. Photo by Eric Lin.[/caption] Broadcom's BroadR-Reach automotive portfolio enables the next generation of connected cars to achieve 100 Mbps Ethernet over unshielded single twisted pair cable. Currently, cars use low-voltage differential signaling (LVDS) cabling. LVDS is bulky, heavy and expensive. Single unshielded twisted pair Ethernet, on the other hand, can reduce connectivity costs by 80% and cabling weight by 30% because it's lighter and cheaper than traditional LVDS cabling. [caption id=&quot;attachment_659&quot; align=&quot;alignnone&quot; width=&quot;300&quot;] Unshielded twisted pair Ethernet cabling (right) vs. traditional LVDS cabling (left). Photo by Eric Lin.[/caption] Cable weight in a vehicle may seem trivial, but when you consider that a typical car runs miles of wiring, 30% is pretty significant. This weight reduction has the potential to increase fuel economy. Because BroadR-Reach is based on mature, high-bandwidth, low-cost Ethernet technology, it can be easily integrated into existing systems. It will help set the groundwork for automotive companies to easily integrate the car into the consumer electronics experience and build functionality for driver assistance systems and infotainment. Broadcom, NXP Semiconductors N.V., Freescale Semiconductor and Harman International--along with automakers BMW and Hyundai Motor Co.--formed a special interest group (SIG) to drive wide-scale adoption of Ethernet-based automotive connectivity.
BMW will have the first vehicle based on BroadR-Reach technology in 2013. Previously: In-Car Ethernet Paves the Way for New Features, Increased Efficiency [Video] Related: OPEN Alliance SIG, an organization designed to encourage wide-scale adoption of Ethernet-based, single pair unshielded cable networks as</description>
      </item>
      <item>
         <title>Wireless Tech is Only the Beginning in the Connected Car</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/wireless-tech-is-only-the-beginning-in-the-connected-car/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/wireless-tech-is-only-the-beginning-in-the-connected-car/</guid>
         <pubDate>September 15, 2015</pubDate>
         <description>When consumers think about the nearly 1,000 chips that will be built into cars rolling out just five years from now, what they might envision are improved safety features, enhanced navigation capabilities and hundreds of data-gathering sensors. They're only partly right. These connected cars will have all of that, but will also have all of the wireless connectivity that consumers are not yet well-acquainted with. Broadcom is behind the wireless technologies that will deliver seamless smartphone integration, infotainment and advanced telematics to the dashboards of new cars. Today Broadcom announced a pair of new wireless connectivity chips that are optimized for the automotive market. Featuring the latest in 5G WiFi and Bluetooth Smart technology, Broadcom's automotive-grade wireless chips enable high-speed connectivity, device integration and dashboard infotainment systems in the connected car. These wireless technologies not only enable a more personalized experience for drivers and passengers, they help pave the way for safer roads via up-and-coming vehicle-to-everything communications.
In just a few years, drivers and passengers will be able to: browse the Internet via Wi-Fi on their mobile devices; integrate and track biometric indicators by connecting their smartwatches and other wearables to the car; tap into the growing ecosystem of automotive applications (such as Apple's CarPlay and Google Auto Link); sync and stream multimedia content from the cloud to rear-seat displays; and turn their ride into a mobile LTE hot spot via data-sharing plans from a mobile service provider. &quot;Broadcom's approach tailors market-leading connectivity solutions to meet the stringent quality and environmental demands of the automotive industry,&quot; said Richard Barrett, Broadcom Director of Automotive Wireless Connectivity. &quot;Car makers and tier-one suppliers now have immediate access to the latest in wireless connectivity technologies to keep pace with the rapidly evolving mobile and IoT ecosystem.&quot; That means they are designed, tested and manufactured in</description>
      </item>
      <item>
         <title>Helping the Environment through Energy-Efficient Product Design</title>
         <link>https://www.broadcom.com/blog/chip-design/helping-the-environment-through-energy-efficient-product-design/</link>
         <guid>https://www.broadcom.com/blog/chip-design/helping-the-environment-through-energy-efficient-product-design/</guid>
         <pubDate>April 16, 2012</pubDate>
         <description>Editor's Note: This post is part of our ongoing &quot;Executive Perspective&quot; series for Broadcom. It was authored by Neil Y. Kim, Executive Vice President of Operations and Central Engineering for Broadcom. It was scheduled for April to highlight Earth Day and Broadcom's commitment to environment-friendly technology. The original post is available in PDF format. Broadcom was founded 20 years ago with the vision of &quot;connecting everything&quot; through innovative semiconductor solutions that let people communicate at home, at work and on the go. From our early days to our emergence as a Fortune 500 company, we've always made sure that our technology uses environmental resources responsibly. Broadcom's commitment to social responsibility is demonstrated in part through its commitment to eco-friendly product design. By providing solutions that help customers realize their own sustainability goals, Broadcom serves as a steward of the environment worldwide. The Energy Challenge: Energy conservation and the development of energy-efficient IT products are critical challenges. Estimates show that more than 150 million metric tons of carbon dioxide are produced each year to power IT equipment. This represents about 10 percent of overall electricity demand, or $16 billion annually worldwide. This is a weighty expense to the environment and to the global economy. What's even more alarming is that almost half of that energy is wasted by &quot;always on&quot; electronics that lack adequate power management capabilities. Industry trends suggest even more IT systems will be left on 24 hours a day, including business servers and network printers. In the last few years, &quot;always on&quot; systems have contributed to a steady increase in IT-related carbon dioxide emissions worldwide, and business-as-usual scenarios project a 130 percent rise in carbon dioxide emissions by 2050.
Focus on Energy-Efficient Design Broadcom's long-standing initiatives for greener products and designs contribute to saving energy and reducing greenhouse</description>
      </item>
      <item>
         <title>Pay TV Goes Global: China on the Verge</title>
         <link>https://www.broadcom.com/blog/emerging-markets/pay-tv-goes-global-china-on-the-verge/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/pay-tv-goes-global-china-on-the-verge/</guid>
         <pubDate>October 31, 2012</pubDate>
         <description>Chinese consumers are poised for a major change in the way they watch TV. As it stands, dozens of operators duke it out by region with a patchwork of competing services: cable, Internet Protocol TV (IPTV), satellite and passive optical networking (PON), to name a few. A fragmented market is the norm in the country of more than a billion people, but that's changing as TV offerings are set to converge around broadband technologies. As the world's most populous nation with a rising middle class, China is on the verge of a national initiative called next-generation broadband (NGB). Broadcom is setting the stage for operators to deploy NGB, along with an array of exciting offerings that combine the best of broadcast, broadband and over-the-top (OTT) content for their subscribers. It couldn't be happening at a better time: a Digital TV Research report from earlier this year shows China is expected to have some 315 million pay-TV households by 2017. China has some 167 million broadband subscribers, with a 16 percent annual growth rate, according to October figures published by the Broadband Forum. Broadcom has developed several standards-based technologies to deliver high-quality, high-definition content in a secure, cost-effective way, while helping put in place infrastructure to ramp up China's burgeoning broadband landscape. We're talking about it this week at the International Coverage and Transmission Conference in Hangzhou, where we'll demonstrate the latest in NGB innovations tailored for China. Here's a sampling of what we'll be demonstrating at ICTC. Standards Make it Happen: Broadcom is a frontrunner in standards-setting for NGB, including active participation in the NGB working group led by China's State Administration of Radio, Film and Television agency (SARFT), which oversees the broadcasting industry. We're also working in concert with other connected home organizations, such as MoCA, HomePlug and others, to bring standardized technologies to Chinese homes.
Security</description>
      </item>
      <item>
         <title>WebTV at CES: No Living Room PC, Just High-Performance Streams</title>
         <link>https://www.broadcom.com/blog/webtv-at-ces-no-living-room-pc-just-high-performance-streams</link>
         <guid>https://www.broadcom.com/blog/webtv-at-ces-no-living-room-pc-just-high-performance-streams</guid>
         <pubDate>January 4, 2012</pubDate>
         <description>It's hard to say &quot;Web TV&quot; in front of anyone at the Consumer Electronics Show, a place where attendees have been hearing about the concept for more than a decade but seeing few of those concepts gain real traction. Internet-powered television looks nothing like the old days, when a PC tower resided in an entertainment center on display in the CES parking lot. This time around, things are different. This experience brings the power of the Internet, in the form of streaming movies, videos, music and photos, to the living room. And it works with the digital TV you already have. At CES, Broadcom will be showcasing a system-on-a-chip technology that has the power to turn nearly 2 billion non-connected digital TVs worldwide into Internet-connected smart TVs. Designed for hybrid TVs and Over-the-Top (OTT) media players, the chip's power is in the software platform, which is capable of supporting a wide range of traditional TV, such as what's offered with satellite or cable systems, as well as streaming content delivered over the Internet. In its most optimized state, the chip has a high-performance CPU, advanced software integration, and dual HD decoding and dual transcoding. That takes the experience to the next level: using the technology to offer video conferencing, as well as video support for tablets and smartphones. The chip could be instrumental in putting adoption of the technology on the fast track. Netgear's NeoTV Streaming Player, for example, already utilizes Broadcom's technology to deliver streaming services, such as Netflix, Vudu, Pandora and others, to the TV screen. Keith Nissen, a research director at In-Stat, said that the OTT video segment is seeing continued growth and that barriers, such as cumbersome user interfaces, are being addressed. &quot;Even stronger growth of Internet Video-on-Demand (iVOD) and Electronic Sell-Through (EST) video services is possible if device</description>
      </item>
      <item>
         <title>5G WiFi: The Next Big Thing</title>
         <link>https://www.broadcom.com/blog/ces/5g-wi-fi-the-next-big-thing/</link>
         <guid>https://www.broadcom.com/blog/ces/5g-wi-fi-the-next-big-thing/</guid>
         <pubDate>January 11, 2012</pubDate>
         <description>CES is all about showcasing the latest and greatest in consumer technology. The next big thing on the wireless technology front is undoubtedly the fifth generation of Wi-Fi: 802.11ac, or the more consumer-friendly term, 5G WiFi. Last week, Broadcom announced its first family of 5G WiFi chips, which are rated for 1.3 Gbps at the PHY level, with an actual throughput of 800 Mbps to 1 Gbps. These speeds make 5G WiFi comparable to wired gigabit Ethernet, as well as up to three times faster than its 802.11n counterpart. Broadcom's demonstration shows over 800 Mbps of actual throughput on a 3x3 MIMO setup. At this speed, you can transfer a 4.7 GB DVD in less than 50 seconds! Some features of 5G WiFi that make this possible include: 5 GHz-exclusive spectrum: Because 802.11ac operates only on the 5 GHz band, it enjoys a much cleaner environment than the 2.4 GHz band, which is populated by microwaves, cordless phones, wireless game controllers and Bluetooth devices. Less traffic means less chance of collisions and higher transmission rates. High-density modulation: 802.11ac supports 256 QAM, whereas 802.11n uses less efficient 64 QAM. This means more data is squeezed into the same transmission for higher throughput. Beamforming: While this was an optional feature in 802.11n, beamforming has been standardized in 802.11ac. Beamforming is the ability to control the direction of propagation of wireless signals, giving access points more ability to minimize Wi-Fi dead spots, as well as improving the maximum range of Wi-Fi coverage. Energy savings: Because 5G WiFi has higher throughput, the same amount of data can be transmitted in less time. This means the chip spends more time in idle mode, which leads to greater power efficiency. Expect to see 5G WiFi products hitting the shelves in the next few months. Previously: 5G WiFi Unveiled at</description>
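The DVD transfer claim above is easy to sanity-check. A minimal sketch: the 4.7 GB and 800 Mbps figures come from the post; the decimal GB-to-bit conversion (1 GB = 8e9 bits) is our assumption:

```python
def transfer_seconds(size_gb: float, throughput_mbps: float) -> float:
    """Seconds to move size_gb gigabytes at throughput_mbps megabits/s."""
    bits = size_gb * 8e9              # decimal gigabytes -> bits
    return bits / (throughput_mbps * 1e6)

# 4.7 GB DVD over an 800 Mbps link:
print(round(transfer_seconds(4.7, 800), 1))  # 47.0, i.e. under 50 seconds
```

At the full 1 Gbps end of the quoted range the same disc would move in under 40 seconds, which is where the gigabit-Ethernet comparison comes from.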
      </item>
      <item>
         <title>A Match Made in (Entertainment) Heaven: Broadcom + DLNA</title>
         <link>https://www.broadcom.com/blog/ces/a-match-made-in-entertainment-heaven-broadcom-dlna/</link>
         <guid>https://www.broadcom.com/blog/ces/a-match-made-in-entertainment-heaven-broadcom-dlna/</guid>
         <pubDate>January 24, 2012</pubDate>
         <description>The Digital Living Network Alliance (DLNA) shares the same vision as Broadcom: to enable a truly connected home. Both envision a time when there's a widely adopted, interoperable network of consumer devices that enables a seamless environment for sharing and growing new digital media and content services. A leader in driving key standards and transcoding integration to meet industry and consumer demand for TV, Internet and video everywhere, Broadcom supports DLNA Premium Video together with MoCA, HomePlug and Wi-Fi on our connected home platforms to fuel pioneering connected experiences in the home and on the go. At CES 2012, DLNA introduced Premium Video, through which service providers can allow consumers to stream their favorite television programs and movies to DLNA Certified products such as networked set-top boxes, digital televisions, tablets, mobile phones, Blu-ray disc players and video game consoles. DLNA Premium Video enables the streaming of television programs and movies to multiple DLNA Certified products. For instance, you can begin watching a favorite television show in the living room on your DLNA Certified television and then continue the same program hours later in your bedroom on a DLNA Certified tablet. Content will be delivered by service providers to a single set-top box or gateway that connects to your home network, reducing the clutter of multiple set-top boxes in the home. &quot;Connected devices are taking on an important role in the home, providing consumers with additional choices for what products to use, and where to place them, within their homes,&quot; said Jason Blackwell, practice director, digital home, ABI Research. &quot;We project that DLNA member companies, such as ACCESS and Broadcom, will continue to support and expand their offerings, driving the next generation digital home ecosystem.&quot; 
In the video demo below, Broadcom Blog Squad member Prashant Mantha interviews Alan Messer, the head of Advanced Technology Lab SISA at</description>
      </item>
      <item>
         <title>Global Connectivity Converges at CES 2013</title>
         <link>https://www.broadcom.com/blog/ces/global-connectivity-converges-at-ces-2013/</link>
         <guid>https://www.broadcom.com/blog/ces/global-connectivity-converges-at-ces-2013/</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>It's a big world out there. Broadcom recognizes that the love of, and the demand for, the latest and greatest consumer electronics technologies knows no geographic bounds. As Broadcom heads to Las Vegas for the International Consumer Electronics Show this week, where the gadget pageantry is set to dominate the tech media in the U.S., we're looking beyond Western borders. Broadcom's products are found everywhere on the planet, from urban corporate data centers and the cloud to the most isolated villages. At CES, Broadcom is set to explore what it means to have a truly Connected Life, where mobility, Internet access and connectivity converge in a seamless way whether you are at home, at work, or on the go. In the past year, Broadcom has announced innovative breakthroughs for emerging markets across a number of categories, including the growth of affordable smartphones, the increasing demand for robust cable TV infrastructure and the proliferation of broadband around the world. It's all part of Broadcom's commitment to Connecting Everything. Among the examples: China: Last year, Broadcom announced key infrastructure technologies that are modernizing the pay-TV landscape in the world's most populous country. New innovations in cable developed by Broadcom engineers ushered in DOCSIS EoC in China. The result specifically addresses China's government-mandated Next Generation Broadband (NGB) initiative with a cost-effective, high-performance way of implementing high-speed cable networks. Latin America: For a growing number of TV subscribers in Latin America, TV will become an interactive hub that will allow consumers to check their online bank accounts, purchase the latest designs spotted on a favorite telenovela character, or learn more about the upcoming World Cup. Thanks to Broadcom technology, Brazilians are being ushered into the digital TV age. 
Broadcom's technology delivers ISDB-T digital TV broadcasts with faster data speeds, lower power consumption</description>
      </item>
      <item>
         <title>New 16Gb Fibre Channel (16GFC) Benchmarks Reveal Some Surprising Results</title>
         <link>https://www.broadcom.com/blog/16gfc-benchmarks-reveal-surprising-results</link>
         <guid>https://www.broadcom.com/blog/16gfc-benchmarks-reveal-surprising-results</guid>
         <pubDate>January 15, 2013</pubDate>
         <description>Recent Host Bus Adapter (HBA) testing by Demartek labs comparing Emulex and QLogic 16GFC HBAs yielded some very surprising results. While we already knew the Emulex LPe16000B delivered exceptional performance – with more than 1.2 million I/O operations per second (IOPS) on a single port1 – and is shipped by all major OEMs today, it was the QLogic performance that had us all scratching our heads. Demartek found that the newly released QLogic QLE2672 fell significantly short of its advertised claims, in some cases demonstrating worse performance for its 16GFC adapter than its 8GFC predecessor. So we tested them again, and again, to be sure, and we could not come anywhere close to QLogic’s claimed 16GFC performance. Here are a few of the report highlights: The Emulex LPe16000B was by far the fastest HBA evaluated, enabling 1.2 million IOPS on a single port. Emulex delivered 7x better IOPS than QLogic, which delivered under 200k IOPS on one port and requires both ports to reach its maximum of 323,000 IOPS. The Emulex architecture enables all resources to be applied to a single port, enabling 1.2 million IOPS on a single port when needed.2 The LPe16002B runs at nearly full line rate for SQL Server/Oracle workloads (4k and 8k block sizes) so Service Level Agreements (SLAs) are met. The LPe16002B is up to 124 percent faster for Oracle (4K block sizes) and up to 137 percent faster for SQL Server (8K block sizes) environments. The QLE2672 can’t achieve full line rate until it reaches 16k data block sizes. This is a significant deficiency if you happen to be running Oracle and SQL Server workloads. CPU efficiency testing evaluated the amount of I/O being performed by the HBA in relation to the server CPU utilization being consumed. The</description>
      </item>
      <item>
         <title>RDMA and Network Offloading: Accelerate Workloads and Reduce Cost with Emulex VFA5 on Flex System</title>
         <link>https://www.broadcom.com/blog/rdma-and-network-offloading-accelerate-workloads-and-reduce</link>
         <guid>https://www.broadcom.com/blog/rdma-and-network-offloading-accelerate-workloads-and-reduce</guid>
         <pubDate>February 11, 2015</pubDate>
         <description>
	Today, we are happy to have a guest blog from Kevin Bossman, product marketing manager at Lenovo, as he talks about the Emulex Virtual Fabric Adapter 5 for Flex System servers and how, with RDMA and network offloading, Emulex and Lenovo are accelerating workloads for today’s high-performance data centers.

	~~

	By Kevin Bossman

	Flex System offers high-performance Ethernet and converged networking switches and adapters that can fit into your existing network and future IT environment. These highly flexible products, coupled with on-demand scalability, offer an easy way to scale as your IT requirements grow. Lenovo offers an array of networking communication technologies, ranging from Ethernet to Fibre Channel to InfiniBand to iSCSI.

	Today, we are witnessing explosive growth of data communication, facilitated by the expansion and proliferation of Internet access across the globe. Considering this gigantic demand, the natural question that comes to mind is, “What is the common technology feature that supports such unprecedented growth of data?” The most common answer is Ethernet. Emulex is helping to show why Ethernet is quickly becoming the fastest-growing data communication technology. In this blog, I will discuss how two new features from Emulex’s new Virtual Fabric Adapter 5 (VFA5) chipset are facilitating this trend.

	Read more here on the Lenovo blog!
</description>
      </item>
      <item>
         <title>NFV Performance &amp; ROI Rockets With New Features on Emulex 10Gb &amp; 40GbE Adapters</title>
         <link>https://www.broadcom.com/company/blog/nfv-performance-roi-rockets-new-features-emulex-10gb-40gbe-adapters</link>
         <guid>https://www.broadcom.com/company/blog/nfv-performance-roi-rockets-new-features-emulex-10gb-40gbe-adapters</guid>
         <pubDate>June 11, 2014</pubDate>
         <description>Telcos around the world have long recognized the value of Emulex I/O solutions, but today Emulex has announced planned advanced features that will enable them to more quickly develop and deliver quality network services to customers. The new features include packet processing enhancements and enhanced programmability for the Emulex OneConnect® OCe14000 series of 10Gb and 40Gb Ethernet (10/40GbE) Network and Converged Network Adapters (CNAs). The new features enable telecom equipment manufacturers (TEMs) and telecom operators to accelerate the deployment of Network Functions Virtualization (NFV) solutions. 6WIND and Emulex have partnered to develop the Emulex Poll Mode Driver (PMD) for increased packet processing, and performance updates to the Data Plane Development Kit (DPDK), a set of data plane libraries and network interface controller drivers for fast packet processing on industry-standard servers. These updates are based on the new Emulex open SURF application programming interface (API), which speeds performance and deployment of NFV workloads. The benefits for telcos:

Optimizes ROI for NFV deployments with better programmability, performance and hardware offloads for standard high-volume servers
Simplifies management by deploying one 10/40GbE connectivity solution across fixed, mobile and over-the-top (OTT) networks
Accelerates deployment of virtualized networks and SDN
Delivers capital and operational expenditure (CAPEX and OPEX) savings by deploying a convergence strategy of Ethernet, Fibre Channel over Ethernet (FCoE) and iSCSI
Improves customer experience with high-performance 10/40GbE connectivity

Advanced programmability with the Emulex SURF API: The Emulex SURF API provides direct access to the network processing capabilities of Emulex 10/40GbE adapters. 
With enhanced Layer 3 functionality and better programmability, the Emulex SURF API enables telecom providers to set parameters for queues and traffic steering, in order to build high-performance, more cost-effective applications for evolving NFV workloads. By deploying Emulex 10/40GbE adapters with the Emulex SURF API, telecom operators can</description>
      </item>
      <item>
         <title>High Speed Networks and Gen 6 Fibre Channel Go Together</title>
         <link>https://www.broadcom.com/company/blog/high-speed-networks-and-gen-6-fibre-channel-go-together</link>
         <guid>https://www.broadcom.com/company/blog/high-speed-networks-and-gen-6-fibre-channel-go-together</guid>
         <pubDate>March 1, 2016</pubDate>
         <description>Flash storage system users have solved their storage performance bottlenecks, but in doing so have seen those bottlenecks shift to the surrounding network infrastructure. Some users report that their 8Gb Fibre Channel (8GFC) switches and host bus adapters (HBAs) have difficulty pushing the load fast enough into the flash arrays to exercise all the SSDs, limiting scalability and hampering ROI. Because flash arrays deliver such high IOPS with low latency, you can easily see how upgrading to the new faster, low-latency Emulex Gen 6 HBAs is key to solving these network bottlenecks. The new Emulex Gen 6 LPe31000 and LPe32000 series 16GFC and 32GFC HBAs deliver the best Gen 6 HBA performance: up to 1.6 million IOPS on a single port, with half the hardware latency of the previous generation. Notable performance improvements have been achieved by replacing 8GFC HBAs and switches with Emulex Gen 6 HBAs and Brocade Gen 6 switches, even when connected to an older 8GFC flash array. TPC-H benchmark testing by Demartek has shown data warehousing transactions completing in one quarter of the time vs. 8GFC. Imagine how completing queries in minutes rather than hours will impact the bottom line of a business. But performance is only part of what makes upgrading to Gen 6 FC so attractive. Features such as Emulex ExpressLane, secure firmware updates and Forward Error Correction improve performance, reliability and security. Support for Brocade diagnostics such as ClearLink (D_port) and the Brocade I/O Insight suite of I/O performance monitoring tools provides the advanced features that enterprises need to maintain SLAs. With continued advances such as these, it’s easy to understand why over 80% of flash customers use FC1, and 90% of Fortune 1000 companies trust their mission-critical data to Fibre Channel, according to Demartek2. Learn more about the Emulex Gen 6 HBAs, available later</description>
      </item>
      <item>
         <title>Rich Nelson in Rapid TV News: How Consumer Viewing Habits are Driving Innovation</title>
         <link>https://www.broadcom.com/blog/home-entertainment/rich-nelson-in-rapid-tv-news-how-consumer-viewing-habits-are-driving-innovation/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/rich-nelson-in-rapid-tv-news-how-consumer-viewing-habits-are-driving-innovation/</guid>
         <pubDate>March 9, 2013</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Rapid TV News, in which Rich Nelson, Senior Vice President of Marketing for the Broadband Communication Group at Broadcom, talks about UltraHD TV. From Rapid TV News: Since the advent of digital television in 1993, consumers have demonstrated their appetite for ever-increasing screen sizes, sleeker displays and better viewing experiences. And while technological innovations have emerged steadily over the past twenty years, in the past five years alone the market has experienced a major leap forward in innovation, driven by radical changes in consumer viewing habits. Today, even as consumer desire for live TV, especially sporting events and premieres, holds steady, there is also a measurable uptick in consumption of streaming video and over-the-top (OTT) content. And streaming content is no longer limited to the TV, as consumers engage with multiple screens in tandem via their laptop, tablet or smartphone. A recent report from analyst firm Infonetics estimates that 83% of viewers are engaged in multiscreen viewing, yet only 39% of service providers today offer support for streaming content to those secondary screens. In fact, the popularity of free streaming video, OTT subscription services and the onslaught of Internet-connected devices has driven a rapidly growing demand for bandwidth. So how are service providers preparing to deliver the bandwidth necessary to support consumer demand for multiscreen viewing? Many are looking to a new industry-wide compression standard called HEVC, or H.265, a critical enabler that reduces bandwidth and doubles the coding efficiency necessary for multiscreen viewing. HEVC also enables the delivery of UltraHD, which at 4x the resolution of today's HD TVs produces obvious advantages for home viewers. With display resolutions of</description>
      </item>
      <item>
         <title>OPEN Special Industry Group Releases Ethernet Specs</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/open-special-industry-group-releases-ethernet-specs/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/open-special-industry-group-releases-ethernet-specs/</guid>
         <pubDate>April 16, 2015</pubDate>
         <description>Ethernet has won strong support from automakers and their suppliers because of its speed, low cost and use of secure networking standards that are as common as the ubiquitous blue cables in people's homes and offices.

Broadcom is a founding member of the OPEN Alliance Special Industry Group (OPEN stands for One-Pair Ether-Net) and has been staunchly behind the push for Ethernet as a standard for connectivity in the car since 2011.

Earlier this month, the non-profit group gave a boost to its cause, and a nod to the open source community, by releasing its automotive Ethernet specifications, which Broadcom has branded BroadR-Reach, to the automotive developer community for download.

&quot;Based on its high-bandwidth, price-performance, ubiquity and inherent network security features, use of automotive Ethernet is on a significant trajectory,&quot; Natalie A. Wienckowski, General Motors' Architect - Electronics Hardware Global Lead and OPEN Alliance SIG Chair, said in a statement. &quot;By making our specifications widely available, we can further drive wide-scale adoption of the technology throughout the automotive ecosystem.&quot;

The group has seen other milestones in the past few years, gaining more than 200 member companies, realizing reductions in power usage for the networking technology, and expanding into new automotive features such as shark-fin antennas, telematics systems and head-unit instrumentation stacks.

The released specifications are set to push the reach of automotive Ethernet even further by building on the work of numerous technical committees focused on driving interoperability, compliance and testing requirements.

This month, the OPEN SIG introduced its tenth technical committee, in conjunction with the IEEE, focused on further reducing energy consumption by establishing a power-saving sleep mode for automotive use cases.</description>
      </item>
      <item>
         <title>Broadcom at CCBN: The China TV Blitz Begins</title>
         <link>https://www.broadcom.com/blog/emerging-markets/broadcom-at-ccbn-the-china-tv-blitz-begins/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/broadcom-at-ccbn-the-china-tv-blitz-begins/</guid>
         <pubDate>March 19, 2012</pubDate>
         <description>The Pay-TV industry in China is coming of age, with TV, broadband and phone services now being bundled into a single package for Chinese consumers. With Broadcom's commitment to powering next-gen TV and Internet experiences around the globe, it only makes sense that we would have a presence at the China Content Broadcasting Network (CCBN) conference in Beijing later this week. Consumers in China are hungry for new advanced services, things like video chat over the TV screen. The potential market for adoption is great. There are about 1.3 billion people in China, with a growing middle-class population of about 300 million, nearly the size of the U.S. population. In addition, the government is working to convert some 202 million households to digital by 2015 and is starting to offer additional HD channels and even some 3D programming, according to China-based BDA Research. On the Internet side, the number of connected users is expected to grow at a rate of about 10 percent per year over the next five years. The market is ready for more broadband, HDTV and network convergence. Broadcom is helping to set up the infrastructure to make this happen, at a lower cost and higher quality. What we call &quot;DOCSIS-based EoC&quot; uses a standardized and interoperable system that ensures a quality TV and Internet experience for Chinese consumers. Broadcom has help from top customers and partners in the region, further helping China to &quot;Get Connected.&quot; Broadcom will also be showcasing the connected home ecosystem that fuels multi-screen TV experiences, such as delivery of TV and Internet streams to mobile devices like tablets and smartphones. With support for key home networking technologies and standards like Wi-Fi, HomePlug AV, MoCA and DLNA, users enjoy content on any screen they choose. With Broadcom's ability to integrate transcoding on its platforms, as well as design &quot;Full-Band</description>
      </item>
      <item>
         <title>Hands On with the uWand, Changing the Remote Control Experience with Broadcom Tech [Video]</title>
         <link>https://www.broadcom.com/blog/ces/hands-on-with-the-uwand-changing-the-remote-control-experience-with-broadcom-tech/</link>
         <guid>https://www.broadcom.com/blog/ces/hands-on-with-the-uwand-changing-the-remote-control-experience-with-broadcom-tech/</guid>
         <pubDate>January 10, 2013</pubDate>
         <description>Even the laziest of couch potatoes might agree that pressing buttons on the TV remote is the most passive way of interacting with the latest generation of smart TVs.

Thankfully, gaming systems like the Nintendo Wii (with its gesture-based Wiimotes) have ushered in a new understanding of how consumers can get hands-on with the tube.

At the International Consumer Electronics Show this week, Philips is showcasing its partnership with Broadcom and others by demoing a sophisticated new gesture-based remote control, called uWand.

The uWand's marketing pitch is dead-simple: &quot;We all know how to point,&quot; reads one of its glossy brochures.

The experience involves simple gestures to navigate the TV, using the full screen to move through an on-screen channel guide or even journey through a video game quest.

&quot;It's too tedious to press down, down, left, left to get somewhere, but the uWand brings simple gestures to get to the content,&quot; said Navin Natoewal, general manager of Media Interaction and Intellectual Property &amp; Standards at Netherlands-based Philips.

We spent some time with Natoewal and the uWand team at the Philips booth at CES and got a first-hand look at how these gestures, flicking, swiping and waving, are changing the user experience. At the heart of it all is Broadcom's BCM7425, a set-top box platform that's integrated with uWand drivers in the set-top boxes found in most living rooms.

Check out the video below to learn more about uWand and, of course, to see it in action:



Not heading to Vegas? Get the latest CES news from Broadcom and our partners by liking us on Facebook, following us on Twitter and reading the blog.

Related:

	Near Field Communication is a (Video) Game Changer for Wii U
	Tech Overdrive: Inside the Broadcom Booth at CES
	Engadget: Philips uWand Motion-Sensing STB Remote Hands-On [Video]
</description>
      </item>
      <item>
         <title>The Benefits of Network Virtualization Offload Technologies to Optimize Performance for NVGRE | Emulex Labs</title>
         <link>https://www.broadcom.com/company/blog/benefits-network-virtualization-offload-optimize-nvgre-performance</link>
         <guid>https://www.broadcom.com/company/blog/benefits-network-virtualization-offload-optimize-nvgre-performance</guid>
         <pubDate>June 3, 2013</pubDate>
         <description>As we have discussed before, NVGRE, or Network Virtualization using GRE (an informative RFC), defines how to build virtual networks in Hyper-V environments. A Virtual Network Fabric (VNF) creates a virtual network infrastructure where a virtual machine (VM) can be created and moved without the limitations that would be imposed by the legacy network infrastructure. With NVGRE, VMs live on a single virtual network defined by a Tenant Network ID (TNI) in the NVGRE virtual network. VMs can be moved from any physical server to any other physical server: NVGRE creates a virtual L2 network across physical L3 boundaries, so the VM keeps its MAC and IP address no matter where it moves. Furthermore, network configuration becomes automated, so any network changes required to create a new VM can be done in minutes instead of days. This improves the agility of private and hybrid cloud infrastructures and lowers the cost of network management for those environments. It is important to note that NVGRE can be implemented in software, and the solution works well on Converged Network Adapters (CNAs) and Network Interface Cards (NICs) provided by Emulex. That said, as Microsoft stated in their presentations at the 2011 Windows Build Conference (here and here and slides here), NIC participation in NVGRE, specifically offloads to build encapsulated packets for NVGRE, is essential. Without NIC participation, GRE breaks today’s task offloads, which disables nearly 10 years of NIC enhancements that improve performance for high-performance Ethernet networks. Specifically, NVGRE breaks LSO and other NIC performance optimizations. This creates a performance penalty, as illustrated in the table below. Note: These test results are illustrative in nature and will vary based on VM density, server configuration, and other test parameters. Basically, network throughput is reduced by 27%</description>
      </item>
      <item>
         <title>What Happens When You Cross an Open Source Tool with an Enterprise Data Center?</title>
         <link>https://www.broadcom.com/blog/what-happens-when-you-cross-an-open-source-tool-with-an-enterpri</link>
         <guid>https://www.broadcom.com/blog/what-happens-when-you-cross-an-open-source-tool-with-an-enterpri</guid>
         <pubDate>November 3, 2014</pubDate>
         <description>The emergence of new technologies and frameworks, many open source, has created a plethora of opportunities for customers to build innovative architectures for cloud computing, big data and new data storage paradigms. With frameworks such as OpenStack, repositories such as Hadoop, and object storage technologies such as Ceph, we are seeing that customers are hungry for solutions, and even hungrier for knowledge on how to implement them. Each framework or technology brings with it a whole new opportunity for vendors such as Emulex to educate our customers with design guides, best practices and reference architectures. To facilitate this, Emulex today introduced a new line of solutions called ExpressConfig™. The family of Emulex ExpressConfig solutions enables IT professionals to optimize I/O connectivity for cloud, Web-scale and enterprise environments. Designed for use with the Emulex OneConnect® OCe14000 family of 10Gb and 40Gb Ethernet (10GbE and 40GbE) Network Adapters and Converged Network Adapters (CNAs), ExpressConfig solutions will help customers get the most value out of the pre-designed integrated system. The solutions will include: integrated software drivers for the OpenStack Neutron, Cinder and Horizon modules; multi-function feature validation confirming that key features work as expected in combination with other components; performance characterization for important I/O metrics; solution design guides with configuration techniques and best practices; solution-specific technical services and support; and solution blueprints for OpenStack. Working with its partners, Emulex created ExpressConfig solutions for advanced storage, networking and big data applications that will be included with upcoming OpenStack releases. 
These solution blueprints build upon current capabilities to increase tenant density and resource utilization while improving performance, end-to-end Quality of Service (QoS) policy enforcement and overall power consumption. The solution blueprints will address key pain points such as storage performance and scalability, virtualization and CPU overhead, tenant vulnerability to “noisy neighbor” effects that misallocate</description>
      </item>
      <item>
         <title>How to Enable NPIV for Emulex OneConnect UCNA Adapters Configured for FCoE</title>
         <link>https://www.broadcom.com/blog/how-to-enable-npiv-for-emulex-oneconnect-ucna-adapters-configure</link>
         <guid>https://www.broadcom.com/blog/how-to-enable-npiv-for-emulex-oneconnect-ucna-adapters-configure</guid>
         <pubDate>November 28, 2011</pubDate>
         <description>
	
	Recently, I was asked how to enable N_Port ID Virtualization (NPIV) for our high-performance Emulex OneConnect 10Gb Universal Converged Network Adapters (UCNAs) configured for Fibre Channel over Ethernet (FCoE). Searching through the Emulex documentation pages, as the requester had, I was also unable to locate any information on this configuration. I didn’t think this could be any more difficult than configuring Fibre Channel, so I thought I’d take a stab at it. I used a Microsoft Windows Server 2008 host with an Emulex OneConnect OCe10102 adapter and Emulex OneCommand Manager 5.2.12.1 and 5.2.12.2, configuring one FCoE port. Since our adapters have two ports, you would repeat the steps below for the second port.

	Here we go:

	 

	
	1. Open OneCommand Manager, select “View” from the drop-down menu, and select “Group by adapters”

	2. Select the FCoE port

	3. Select the Driver Parameters tab

	4. From the Adapter Parameter list, left-click once to select Enable NPIV

	5. Select “Enable” from the Modify Adapter Parameter section. The Adapter Parameter will turn red, indicating that a reboot is required. Because this enables only one port, another reboot will be required for the second port.

	6. Select “Apply” and reboot the server

	7. When the server comes back up, log in to your Windows server and open OneCommand Manager

	8. Select “View” then “Group Adapters by Virtual Port”

	9. Select the FCoE port and you should now be able to create your virtual ports

	10. Select “Create Virtual Port” and a new virtual port confirmation window will appear

	11. As shown in the image below, the new virtual port will appear just below the physical port

	I hope this helps. If you still have questions, please contact Emulex technical support.
</description>
      </item>
      <item>
         <title>Win a Roku 2 Courtesy of Broadcom in Engadget's Holiday Blues Buster 2011 Contest</title>
         <link>https://www.broadcom.com/blog/home-entertainment/win-a-roku-2-courtesy-of-broadcom-in-engadgets-holiday-blues-buster-2011-contest/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/win-a-roku-2-courtesy-of-broadcom-in-engadgets-holiday-blues-buster-2011-contest/</guid>
         <pubDate>December 19, 2011</pubDate>
         <description>Have you seen the cool contest that Broadcom is sponsoring with Engadget this week?

Check it out!

The contest runs through Friday and includes three great giveaways courtesy of Broadcom.

The full list of great giveaways for the week is as follows:

December 19 - Roku 2 from Broadcom
December 20 - Unlocked GSM iPhone 4S from Wyse
December 21 - Unlocked Samsung GT-I9100 Galaxy S II (international version) from Broadcom
December 22 - Verizon-branded Samsung Galaxy Nexus LTE from Appitalism
December 23 - iPad 2 WiFi 16GB from Broadcom

To enter, leave a comment on the Engadget blog post, so be sure to click the link above and comment away!

While you're at it, check out more about what Broadcom is doing in the run-up to the Consumer Electronics Show 2012 by liking our Facebook Page! 

Good luck!!

Photo: Engadget</description>
      </item>
      <item>
         <title>Broadcom and CRI Thwart Set-Top Box Content Pirates</title>
         <link>https://www.broadcom.com/blog/home-entertainment/broadcom-and-cri-thwart-set-top-box-content-pirates/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/broadcom-and-cri-thwart-set-top-box-content-pirates/</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>In the age of on-demand television where DVRs, Netflix and mobile streaming give viewers instant access to their favorite programming the broadcasters themselves havent moved much.Shaken by the piracy that has been compromising their content since the arrival of the Internet, broadcasters have been almost overly cautious about protecting their programming. They're kind of justified.Take this recent example: The uber-popular HBO series &quot;Game of Thrones&quot; saw so much pirating that the number of downloads per episode almost matched the average number of paying viewers.This practice ends up hurting everyone involved, including broadcasters (who'll earn less through advertising or subscription fees to produce their shows) and viewers (who get lower-quality programming). In response, cable and satellite providers are getting savvier about the set-top boxes in their customers living rooms, which are put in place to transmit digital television content and manage access to paid premium channels.Those set-top boxes will be hotly contested as TV becomes more Internet-based. Broadcom is throwing its hat into the ring against piracy at this week's International Consumer Electronics Show in Las Vegas, where it unveiled a critical licensing agreement that will enable advanced security measures across its entire line of set-top box platforms. Broadcom's hacking countermeasures provided by a licensing agreement with San Francisco's Cryptography Research Inc. (CRI) work by providing security at the chip level and put up a protective wall that secures premium content before pirates can get to it.That's better than the majority of what's out there today, which relies on (sometimes easy-to-corrupt) software loaded onto set-top boxes. 
The partnership with CRI aims to cut down on a certain class of attacks known as Differential Power Analysis, or DPA, and stopping the information leakage from chips that allow for so-called &quot;side channel attacks.&quot; These security features should give broadcasters the peace of mind</description>
      </item>
      <item>
         <title>Connected Home Primer: Broadcom Supports Basket of Technologies</title>
         <link>https://www.broadcom.com/blog/ces/connected-home-primer-broadcom-supports-basket-of-technologies/</link>
         <guid>https://www.broadcom.com/blog/ces/connected-home-primer-broadcom-supports-basket-of-technologies/</guid>
         <pubDate>January 12, 2012</pubDate>
         <description>Weve already talked about how Broadcom is helping connect the car. Today well take a look at Broadcom's role in connecting the home. Broadcom's set-top box chips support four technologies for home networking: Wi-Fi (IEEE 802.11), Ethernet (IEEE 802.3), HomePlug powerline networking (IEEE P1901), and MoCA (coaxial networking). A little about each technology: 802.11(a/b/g/n/ac) Wi-Fi With an abundance of mobile devices such as notebooks, tablets, and smartphones in the market today, its no wonder that Wi-Fi is such a popular choice for the home network.It provides the most versatile networking solution. Broadcom has a long history of producing chips in support of various Wi-Fi standards. Recently, Broadcom announced its first line of 802.11ac-compliant 5G WiFi chips. [caption id=&quot;attachment_871&quot; align=&quot;alignleft&quot; width=&quot;300&quot;] HomePlug AV adapter, connected home technology supported by Broadcom.Photo by Willy Wong.[/caption] HomePlug AV HomePlug AV is the powerline networking specification developed by the HomePlug Powerline Alliance, a trade association led by 70 industry members including Broadcom.It certifies the IEEE 1901 standard for powerline networking. Powerline networking uses existing residential electrical wiring to create a home network.HomePlug AV adaptors offer a plug-n-play solution to easily bridge a home Ethernet network over any power outlet.The HomePlug AV specification supports up to 200Mbps, fast enough to support common 100Mbps Ethernet. Yesterday, Broadcom joined Qualcomm Atheros in officially endorsing HomePlug AV as the powerline networking technology of choice in the home. MoCA (1.1/2.0) The Multimedia over Coaxial Alliance is a trade group of which Broadcom is a member - that promotes a standard that uses coaxial cables to connect the home entertainment network.Coaxial cabling is attractive because it can support very high bandwidths and is already installed in most people's homes. 
Many of the big cable providers support MoCA, including Charter, Comcast, Cox, DirectTV, Dish Network, Time Warner Cable, and Verizon FiOS. Broadcom</description>
      </item>
      <item>
         <title>Rich Nelson in IPTV News: Five Things That Need to Happen for Ultra HD to Take Off</title>
         <link>https://www.broadcom.com/blog/rich-nelson-in-iptv-news-five-things-that-need-to-happen-for-ul</link>
         <guid>https://www.broadcom.com/blog/rich-nelson-in-iptv-news-five-things-that-need-to-happen-for-ul</guid>
         <pubDate>May 18, 2014</pubDate>
<description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in IPTV News, in which Rich Nelson, Senior Vice President of Marketing, Broadband Communications Group, talks about the five things that need to happen for Ultra HD to take off. From IPTV News: Hindsight may be 20/20, but the picture is crystal clear for the future of television. Just look at the past 20 years of the industry: At each step forward in the evolution of the television, picture quality has improved, content has become more diverse and the viewing experience has become even more engaging. For those who remember the experience of watching analogue programs on a black-and-white television, the transition to digital television seemed like quite an impressive accomplishment, offering a much higher quality picture and a more dynamic entertainment experience. Today, however, the advent of Ultra High Definition (HD) TV takes the consumer experience even further. Ultra HD promises a telepresence-like quality with a screen resolution that's 4x that of standard HD (1080p) today. For consumers, that means the future of digital television has never looked sharper. Of course, getting to this point will not come without a few growing pains. Ultra HD is rewriting the rulebook when it comes to image quality, which requires changes throughout the ecosystem, from program production to content distribution technology. Moving the market forward will require action in a number of key areas, including: 1. Content Production While an Ultra HD or 4K TV is appealing because of its increased resolution and more life-like viewing experience, those benefits are only attainable if the content displayed on the TV is also in Ultra HD format. In 2012, there were fewer than ten movies filmed in Ultra HD. Last year that number grew</description>
      </item>
      <item>
         <title>Stephen Palm in ECN Magazine: Conscious Technologies Cut Home Network Energy</title>
         <link>https://www.broadcom.com/blog/stephen-palm-in-ecn-magazine-conscious-technologies-cut-home-ne</link>
         <guid>https://www.broadcom.com/blog/stephen-palm-in-ecn-magazine-conscious-technologies-cut-home-ne</guid>
         <pubDate>November 30, 2015</pubDate>
<description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in ECN Magazine, in which Dr. Stephen Palm, senior technical director, Broadband &amp; Connectivity Group, Broadcom, talks about the technologies that are making energy-efficient home networking possible. From ECN Magazine: As connected devices multiply in the home, from set-top boxes, smartphones and computers to Internet of Things (IoT) gadgets and monitors, there's concern that there will be a corresponding increase in overall energy used by these myriad network interfaces. Instead, these network interfaces are actually the key to reducing total energy in the household, by consolidating resources and notifying when resources are needed or can be placed in a lower power state or turned off. The concerns are being addressed by new device interface standards and protocols that are combining with technologies such as the IoT and working together in the background to make usage-aware decisions that minimize energy used while also ensuring the expected -- or even improved -- user experience. In June of 2014, the Consumer Electronics Association released a study it had commissioned that showed consumer electronics consuming 12% of the average household power budget, with most of that going to TVs (30%), set-top boxes, and PCs, followed by gaming consoles and network equipment, such as routers (Figure 1). [caption id=&quot;attachment_15441&quot; align=&quot;aligncenter&quot; width=&quot;800&quot;] Figure 1. Consumer electronics consumed 12% of the average yearly home power budget in 2013. Streaming OTT video services have the potential to increase that percentage, but new technologies can tightly control the power consumed. 
(Image courtesy of the Consumer Electronics Association)[/caption] In the two years since then, streaming video has taken off, with over-the-top (OTT) streaming services spreading from TVs and STBs to tablets and smartphones. Juniper Research</description>
      </item>
      <item>
         <title>Broadcom Compression Tech Speeds Adoption and Reach of 4K TV Content</title>
         <link>https://www.broadcom.com/blog/broadcom-compression-tech-speeds-adoption-and-reach-of-4k-tv-co</link>
         <guid>https://www.broadcom.com/blog/broadcom-compression-tech-speeds-adoption-and-reach-of-4k-tv-co</guid>
         <pubDate>June 9, 2015</pubDate>
<description>It's clear that Ultra HD TV is coming to living rooms around the world. As industry-watchers look ahead, so, too, is Broadcom. The company is constantly innovating to anticipate the needs of operators and consumers alike as the market heads mainstream. Not only is Broadcom helping Pay TV operators optimize their existing pipes to deliver 4K YouTube content over their networks, but also to offer support for YouTube and more features to their subscribers who are on the leading edge of adoption. The Ultra HD picture is coming together as retail prices for the pixel-dense sets continue to fall and content production is on the uptick, including efforts by Samsung, Sony, Netflix, YouTube and satellite operator DirecTV. One of the ways leading cable and broadband providers can prepare is with a Broadcom-backed standard dubbed High Efficiency Video Coding (HEVC), a video compression standard that slashes the required bandwidth of 4K streams so they can be delivered to consumers via a media gateway or set-top box. In early 2013, Broadcom debuted the BCM7445, a flagship Ultra HD system-on-a-chip with encoding and decoding technology that made it possible for operators to deliver four times the number of pixels with 50 percent bandwidth savings. Today, the company announced its next-generation successor, the BCM7445S, which adds more bandwidth compression and support for Google's VP9 open video compression standard. YouTube is among the frontrunners for offering 4K content, which it has supported since 2010. The BCM7445S offers VP9 decode support at speeds up to 60 frames per second, enabling consumers to stream 4K YouTube content to their set-top boxes and display it on an Ultra HD TV. &quot;VP9 is the standard that YouTube wants providers to use for 4K content,&quot; said Joseph Del Rio, product line director at Broadcom. &quot;If you are a provisioner of YouTube content, then</description>
      </item>
      <item>
         <title>More Auto Industry Players Back Ethernet in Cars, Eye Improvements Ahead</title>
         <link>https://www.broadcom.com/blog/more-auto-industry-players-back-ethernet-in-cars-eye-improvement</link>
         <guid>https://www.broadcom.com/blog/more-auto-industry-players-back-ethernet-in-cars-eye-improvement</guid>
         <pubDate>June 25, 2013</pubDate>
<description>The momentum around the connected car is showing no signs of slowing down, and Ethernet, a ubiquitous, inexpensive and robust connectivity standard, is increasingly becoming the technology of choice for some of the world's biggest automakers. A special industry group called the One-Pair Ether-Net (OPEN) Alliance, of which Broadcom is a founding member, recently announced that Caterpillar Inc., PSA Peugeot Citroën, Toyota Motor Corp., Volkswagen Group and Volvo Cars have joined its ranks. Their arrivals bring the membership lineup to more than 140 big-name automakers and top-tier industry players that manufacture in-car entertainment, navigation, driver assistance and safety systems. Read the OPEN Alliance's Press Release (PDF). The swelling support for in-car Ethernet, which the OPEN Alliance is pushing as an industry-wide standard, means more companies are buying into the idea that cars wired up with twisted pair Ethernet cables (based on Broadcom BroadR-Reach technology) have an opportunity to innovate in ways that not only benefit drivers but make a major impact on the bottom line. [caption id=&quot;attachment_6184&quot; align=&quot;alignright&quot; width=&quot;258&quot;] Click to expand the infographic and learn more about Ethernet's role in the connected car.[/caption] One of the first carmakers to offer BroadR-Reach to customers is Germany's BMW, which is expected to roll out the X5 later this year with single-pair, 100 Mbps Ethernet connecting its driver-assistance cameras. On the Broadcom blog, we've been talking up the big reasons why Ethernet is the connectivity technology of choice: It costs less, reduces overall cabling weight and offers zippier data speeds. Ethernet also supports advanced diagnostic features that give mechanics easy access to data collection and system diagnostics for repairs and checkups. 
The trend shows no signs of slowing: By 2017, nearly 90 percent of new vehicles in the U.S. will be of the connected variety, according to a study from ABI Research. The next big</description>
      </item>
      <item>
         <title>The Connected Car Zooms into China as the Internet of Vehicles</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/the-connected-car-zooms-into-china-as-the-internet-of-vehicles/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/the-connected-car-zooms-into-china-as-the-internet-of-vehicles/</guid>
         <pubDate>April 21, 2015</pubDate>
<description>As the world's most populous nation, China's consumer purchasing power has been a boon for the tech industry, most obviously around the adoption of mobile devices. Now, there's a growing interest in the connectivity that China is bringing to the largest mobile device: the automobile. &quot;The Internet of Vehicles (IoV) is one of the fastest growing segments for the chip industry,&quot; said Ali Abaye, senior director of automotive at Broadcom. &quot;The impact of this seismic shift in automotive design has particular interest for China.&quot; Along with its deep understanding of China's market dynamics, Broadcom brings to the table a suite of automotive-grade connectivity technologies, including BroadR-Reach Ethernet Switch &amp; PHYs, Wi-Fi/Bluetooth combo chips for smartphone integration, support for the Android app ecosystem and, most recently, support for Near Field Communication (NFC). What that means for automakers is a secure, turnkey path to designing and building connected cars with the latest in infotainment, telematics, safety sensors, advanced driver assistance systems, cameras, on-board diagnostics and mobile device integration. This week, Abaye will share Broadcom's vision for the connected car and its potential impact on the growing Internet of Vehicles trend at the China Intelligent &amp; Connected Vehicle Summit in Shanghai. In addition to industry excitement around the event, market researchers' projections are telling: By 2020, new cars are expected to have some 1,000 chips per vehicle, according to a January report by Strategy Analytics. That same year, Chinese car buyers are expected to make up about 35 percent of all new car sales. That far surpasses estimates for U.S. car buyers (at 14 percent) and European car buyers (10 percent), McKinsey &amp; Company data showed. 
Broadcom is poised to make an impact on China's Internet of Vehicles in four key areas, according to Abaye: Ethernet Network Security: The global standard of Ethernet, for decades the world's most</description>
      </item>
      <item>
         <title>ARM's Reach: 50 Billion Chip Milestone [VIDEO]</title>
         <link>https://www.broadcom.com/blog/arms-reach-50-billion-chip-milestone-video</link>
         <guid>https://www.broadcom.com/blog/arms-reach-50-billion-chip-milestone-video</guid>
         <pubDate>March 3, 2014</pubDate>
<description>Amid all the trend-setting consumer technology being announced at Mobile World Congress last week, processor giant ARM Holdings Plc. quietly benchmarked its own milestone. The company, based in Cambridge, U.K., creates and licenses the most ubiquitous chip architecture in the world. Last week, it announced that 50 billion (yes, with a B) ARM-powered chips have been shipped by its partners. Here's another impressive stat they like to share: ARM-based chips are found in nearly 60 percent of the world's mobile devices, and if the chips were laid out end-to-end, they would circle the globe a dozen times. To commemorate the 50 billion milestone, ARM reached out to Broadcom and some of its big-name partners in the industry to share their thoughts in a video retrospective. The video features Broadcom President and Chief Executive Scott McGregor and Sophie Wilson, director of IC design at Broadcom and one of the chief architects of the ARM processor architecture. Wilson has a particularly close tie to ARM via her pioneering work at Cambridge's Acorn Computers some 35 years ago. While working at Acorn, she and colleague Steve Furber took less than a week to design and implement the prototype of what became the BBC Microcomputer. Furber acknowledged in his book on ARM that the processor's development wouldn't have been possible without Wilson, whose original instruction set architecture survives, extended but otherwise largely unscathed, to this day. Wilson went on to design the Firepath processor and was one of the seven co-founders of Cambridge, U.K.-based DSL company Element 14 Inc., which was acquired by Broadcom in 2000. Last year, she was named a winner in The Economist's Innovation Awards. In 2012, she was named a Fellow by Silicon Valley's Computer History Museum. Along with executives from ARM partners Freescale, MediaTek, Global Foundries and ST Micro, in the video</description>
      </item>
      <item>
         <title>Pay-TV in China Reaches New Heights with Broadcom Technology</title>
         <link>https://www.broadcom.com/blog/emerging-markets/pay-tv-in-china-reaches-new-heights-with-broadcom-technology/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/pay-tv-in-china-reaches-new-heights-with-broadcom-technology/</guid>
         <pubDate>March 20, 2012</pubDate>
<description>BEIJING - Today, Broadcom announced several key wins and higher speed technology that is changing the pay-TV landscape in China. Cable operators there are in the midst of converging their networks to offer telecom, Internet and TV services together to their subscribers. They are putting the plumbing in - so to speak - so that Chinese consumers can watch high-quality HDTV, enjoy high-speed Internet and even new services like video chat through a TV.

Broadcom engineers put their heads together and developed a platform called DOCSIS-based EoC, which takes the proven and standardized DOCSIS (Data Over Cable Service Interface Specification) technology Broadcom helped develop many years ago for the U.S. and marries it to the Ethernet over Coax (EoC) technology that has been widely used throughout China. The result specifically addresses China's government-mandated Next Generation Broadcasting, or NGB, initiative with a cost-effective and high-performance way of implementing high-speed cable networks.

At the China Content Broadcasting Network (CCBN) today, Broadcom announced that its DOCSIS-based EoC cable architecture can support 1Gbps bandwidth access to drive new levels of speed and performance. WASU, the largest cable provider in Hangzhou, revealed that it is deploying Broadcom's technology using cable and network equipment from its partner, Lancable. WASU's Research Institute president, Zhang Changli, said that Broadcom's technology is enabling high performance voice, video and data delivery to their subscribers, revolutionizing home networks and giving them the ability to offer new services.

Sumavision, a leading Chinese provider of video delivery solutions, is also deploying Broadcom's DOCSIS-based EoC cable architecture. With top partner and operator support, Broadcom is focused on accelerating network convergence throughout China.

To learn more, visit our CCBN Page.

Previous Coverage: Broadcom at CCBN: The China TV Blitz Begins</description>
      </item>
      <item>
         <title>Broadcom Technology Contributes to CES Excitement</title>
         <link>https://www.broadcom.com/blog/ces/broadcom-technology-contributes-to-ces-excitement/</link>
         <guid>https://www.broadcom.com/blog/ces/broadcom-technology-contributes-to-ces-excitement/</guid>
         <pubDate>January 13, 2012</pubDate>
<description>LAS VEGAS - The last couple of days at the Consumer Electronics Show have been non-stop. This is certainly not a place for those who &quot;need their space.&quot; The exhibit booths are overflowing as tablets, smartphones, TVs and even some celebrities pull in big crowds. There's definitely some excitement in the air as many categories - not just one or two - are gaining some exposure and attention. At the Broadcom booth, folks are stopping in to learn more about our 5G WiFi chips and the BroadR-Reach in-vehicle Ethernet technology, among other developments. The Blog Squad has been busy as well, talking with the folks behind our technologies, learning more about the potential uses of things like Bluetooth and Ethernet and how Broadcom is paving the way for more advanced features down the road. Case in point: Be sure to catch Blog Squadder Prashant Mantha's interview with Broadcom's Ron Wong, associate product line director for Bluetooth in the Mobile &amp; Wireless Group. In a video interview, Wong demonstrates how Near Field Communications technology - coupled with a Wi-Fi-connected display and a Bluetooth-enabled remote, headset and gaming controller - brings easy pairing to the home entertainment system. We've been hearing about NFC technology in smartphones lately, but there hasn't been a lot of chatter about NFC in peripherals. That's starting to change, and Broadcom is driving it. [caption id=&quot;attachment_552&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Hands on with tablets and smartphones at the Broadcom booth. Photo by Eric Lin.[/caption] We've also dug in on 5G WiFi, as Blog Squadder Eric Lin breaks down some specifics about the technology. There's still so much more to talk about. The Blog Squad will be breaking down the power of &quot;Connectivity,&quot; a big theme at this year's CES, and looking at the arrival of the Android ecosystem on the living room TV. Be sure to be</description>
      </item>
      <item>
         <title>Humax Deploys Industrys First HD Satellite Boxes Designed in 40 nm, Powered by Broadcom</title>
         <link>https://www.broadcom.com/blog/home-entertainment/humax-deploys-industrys-first-hd-satellite-boxes-designed-in-40-nm-powered-by-broadcom/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/humax-deploys-industrys-first-hd-satellite-boxes-designed-in-40-nm-powered-by-broadcom/</guid>
         <pubDate>February 22, 2012</pubDate>
<description>[caption id=&quot;attachment_1271&quot; align=&quot;alignleft&quot; width=&quot;300&quot;] Broadcom's Rich Nelson, Senior VP of Marketing in the Broadband Communications Group, awards Humax's CTO for the industry's first 40 nm set-top box deployment.[/caption]

As the adoption of digital television continues to grow around the world, manufacturers are turning to Broadcom for technology that will drive a low-cost, high-quality interactive high-def experience to emerging markets.

Humax, a premier Korean set-top box (STB) maker, and Broadcom announced today the deployment of the industry's first 40-nanometer high definition satellite STBs.

Silicon designed in the 40 nm CMOS process enables lower power, higher performance and significant design efficiencies beyond 65 nm. Broadcom's technology - the BCM7358 HD satellite STB SoC - features a high performance CPU, advanced security and connectivity support, including Digital Living Network Alliance, or DLNA. Its high integration reduces design complexity, size and overall system cost to accelerate deployment of HD satellite STBs.

Digital TV is growing rapidly throughout the world, driven by consumer demand. Pay-TV subscribers in the Brazil, Russia, India and China markets are projected to reach half a billion by 2016, with approximately 75 percent on digital platforms, according to ABI Research.

Sam Rosen, a senior analyst with ABI, said:
...lower cost HD STBs will help Asian satellite operators to proliferate HD video and other advanced services to consumers that previously could not afford them. Emerging markets require cost-optimized solutions supporting only a single television, options for integrated or external security, robust 2D user interfaces, and HD video.
Today's announcement marks a significant milestone in the accelerated development and quick deployment of high performance 40 nm set-top boxes worldwide.

Learn more about Broadcom's cable set-top box solutions.</description>
      </item>
      <item>
         <title>IBC 2013: Laying the Groundwork for Ultra HD Adoption</title>
         <link>https://www.broadcom.com/blog/ibc-2013-laying-the-groundwork-for-ultra-hd-adoption</link>
         <guid>https://www.broadcom.com/blog/ibc-2013-laying-the-groundwork-for-ultra-hd-adoption</guid>
         <pubDate>September 11, 2013</pubDate>
<description>Ultra HD, the next big thing in television technology, may be on the fast-track to living rooms, but before it can get there, it must first get past some potholes and detours along the way. On the upside for early-adopter consumers, pricing and availability won't be big deterrents. The Consumer Electronics Association has forecast shipments of more than one million Ultra HD sets by 2015, driven in large part by already plummeting prices and increased consumer demand. [caption id=&quot;attachment_10178&quot; align=&quot;alignleft&quot; width=&quot;274&quot;] Click infographic to download and share: Learn more about Ultra HD television and Broadcom's role in helping boost adoption of the new display technology.[/caption] But before Ultra HD can reach the mainstream market, some technology challenges will have to be addressed, and that's where Broadcom comes in. At the International Broadcasting Convention in Amsterdam this week, Broadcom is putting the spotlight on its suite of IP, TV, cable and satellite systems-on-a-chip that will meet the technical challenges broadcasters face as they attempt to deliver Ultra HD content. One of the biggest issues likely to be discussed at this show is how to most efficiently produce and deliver Ultra HD content over networks and hardware that support the latest standards and video compression techniques. &quot;2013 is the year of education and infrastructure development to enable Ultra HD,&quot; said Joe Del Rio, Associate Product Line Director in the Broadband Communications Group at Broadcom. &quot;It's really important to content creators and to the industry to create an economically viable path to those customers.&quot; As it stands, every participant in the production and delivery of Ultra HD content will need to retool for a standard called HEVC (High Efficiency Video Coding, or H.265) so that bandwidth-hogging content doesn't bog down broadcast networks. 
Related: Broadcom and Rovi Team Up to Slim Down Ultra HD's Big Bandwidth Even though</description>
      </item>
      <item>
         <title>International Car Electronics Show? CES 2015 Rolls on with Connected Cars in the Fast Lane</title>
         <link>https://www.broadcom.com/blog/international-car-electronics-show-ces-2015-rolls-on-with-conne</link>
         <guid>https://www.broadcom.com/blog/international-car-electronics-show-ces-2015-rolls-on-with-conne</guid>
         <pubDate>January 5, 2015</pubDate>
<description>Take a glance around the cavernous halls of the Las Vegas Convention Center, as well as the breezeways outside, and you might mistake the 2015 International Consumer Electronics Show for an exhibition for the auto industry. The Consumer Electronics Association, which puts on the yearly tech-fest, said it expects a record 10 automakers to exhibit at the show, covering more than 165,000 square feet of space, up 17 percent from a year ago. Add to that the high-profile appearances by Dr. Dieter Zetsche, Head of Mercedes-Benz Cars, and Mark Fields, CEO and President of Ford Motor Co., on CES stages, and it's clear that one of the biggest trends at this year's show is the Connected Car. The CEA's research shows that some 30 percent of U.S. households currently own a vehicle with a communications, safety or entertainment system, and that figure is only headed upward. &quot;Connectivity features are becoming more mainstream, they're not just for luxury class cars anymore,&quot; said Richard Barrett, Broadcom director of wireless connectivity. &quot;What we'll see is the continuing theme of integration of consumer-level technology into the vehicle, including wireless hot spots, bigger displays and an app ecosystem for the car.&quot; Broadcom's BroadR-Reach technology plays right into this trend. The company today announced an addition to its lineup of automotive Ethernet connectivity solutions capable of supporting 100 Mbps transmission over a single unshielded twisted pair cable: a next-generation chip that enables automakers to build smarter, more secure networked cars. The BCM89811, the company's new low-power physical layer transceiver (PHY), is targeted to serve as the in-vehicle network connectivity backbone, supporting the high bandwidth required for advanced safety and infotainment applications and the features needed to deter malicious attacks on the connected car. 
Power Consumption A big standout feature of Broadcom's new Ethernet offering: Higher levels of chip integration and</description>
      </item>
      <item>
         <title>C-DOCSIS Greenlighted, Ushers Next-Gen Broadband to China</title>
         <link>https://www.broadcom.com/blog/emerging-markets/c-docsis-greenlighted-ushers-next-gen-broadband-to-china/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/c-docsis-greenlighted-ushers-next-gen-broadband-to-china/</guid>
         <pubDate>November 1, 2012</pubDate>
<description>As the cable TV and broadband experience in China goes through an upgrade, government officials are looking to standardize the underlying architecture so that operators not only can offer a reliable, high quality TV and Internet experience but also accelerate deployment of services for cable devices. This initiative, known as Next Generation Broadband, or NGB, is driving the convergence of networks in China to accelerate so-called triple play services of voice, video and data that are bundled for consumers. Broadcom is taking the same DOCSIS cable standard that is behind all cable networks in the United States and applying it to Chinese networks in a standard called C-DOCSIS. The standard, which was recently certified by the State Administration of Radio, Film and Television, or SARFT, promises to bring interoperability and quality of service to cable TV and broadband. The certification also allows cable operators to accelerate deployments throughout the country. Today at the ICTC show, a leading cable conference being held in Hangzhou, Broadcom is showcasing the technology that features C-DOCSIS, called DOCSIS-based Ethernet over Coax (EoC). It is a complete chipset and software cable architecture solution that includes Coax Media Converter (CMC), DOCSIS 2.0 and 3.0 cable modem, and set-top box (STB) system-on-a-chip (SoC) solutions. Broadcom's solution leverages its latest family of 10G EPON Optical Line Terminal (OLT) products in addition to Broadcom's family of GPON chipsets as well as employing DOCSIS technology as part of a customized final 100 meter solution. Read the press release. With more than 1.8 million cable TV subscribers, Wasu, the largest cable operator in Hangzhou, is already upgrading networks and bringing new, advanced triple play services to its subscribers. 
An innovative architecture that not only powers the technologies of today but also sets the stage for the advanced services of tomorrow, C-DOCSIS can change consumer</description>
      </item>
      <item>
         <title>Ballmer Farewell Keynote Showcases Kinect Connectivity</title>
         <link>https://www.broadcom.com/blog/ces/ballmers-farewell-keynote-showcases-kinects-connectivity-potential/</link>
         <guid>https://www.broadcom.com/blog/ces/ballmers-farewell-keynote-showcases-kinects-connectivity-potential/</guid>
         <pubDate>January 10, 2012</pubDate>
<description>LAS VEGAS - No one really expected any real news from Microsoft during CEO Steve Ballmer's farewell speech Monday night at CES. After all, one of the reasons that the company pulled out of the Consumer Electronics Show was because its product launches didn't always coincide with the show's January date. Without news and a couple of hours to fill, it opted instead to change things up and have some fun. The company brought Ryan Seacrest on-stage with Ballmer to emcee a scripted Q&amp;A about mobile phones, Windows and Kinect. And then there was that pause for a performance by the Tweet Choir. (Yes, a real choir that sang on-screen tweets about the keynote.) [caption id=&quot;attachment_533&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Source: Consumer Electronics Association[/caption] Fun and games aside, Microsoft simply used the CES stage for one last time to showcase what it's offering today and what it's working on for tomorrow, without any real drumroll-worthy announcement. And while some of the messaging and demos were a bit long-winded, Ballmer and team left no doubt about how the company is committed to user experiences. They want to connect users with their devices, their online friends and their photos and videos, among other things. Specifically, the company highlighted its work around Kinect, which uses sensor technology for gaming today. As Ballmer noted, the ability for the computer to see the user, hear the user and interact with the user provides new opportunities for other industries, such as health care or education. At the end of the day, the technology centers around the big screen, the living room screen. That's something that Microsoft - and many others at the show with offerings of their own - are showcasing at CES as they highlight the connectivity between TVs, gaming consoles, set-top boxes and mobile devices. At the Dish Network press conference yesterday, it was a</description>
      </item>
      <item>
         <title>Consumers Want It, Carmakers Deliver it at CES: Better Automotive Connectivity Through Ethernet</title>
         <link>https://www.broadcom.com/blog/consumers-want-it-carmakers-deliver-it-at-ces-better-automotive</link>
         <guid>https://www.broadcom.com/blog/consumers-want-it-carmakers-deliver-it-at-ces-better-automotive</guid>
         <pubDate>January 3, 2013</pubDate>
<description>Our smartphones follow us everywhere - the office, the kitchen, the couch, the bedroom - and keep us connected to the things that matter most. The idea of a (gasp!) dead zone with no connectivity is almost unthinkable. The most connected among us find ourselves getting &quot;tech withdrawals&quot; when we're off the grid. If the pace of consumer electronics improvements has taught us anything in the last decade, it's that it doesn't have to be this way - even in our cars. We're headed toward staying connected all the time while on the road. To achieve that, automakers and other players in the car ecosystem are working to bring the most efficient, reliable and speedy connectivity technologies to more of today's drivers. Right now, one of the most promising technologies in the automotive space is Ethernet - but not the blue-wire flavor you're already familiar with. This new Ethernet standard has the potential to shuttle all types of data through a car's different systems to deliver safety and infotainment features to the masses. There's eyes-free control of your texts and emails, smart sensing of road hazards, live updates about open parking spaces and even real-time uploading to the cloud of critical information about location, traffic, fuel consumption and speed for analysis and interpretation. Those visions for the Connected Car will come to life at this year's International Consumer Electronics Show, where more than 100,000 square feet of convention space will be filled with exhibits from 110 different automotive technology companies. If CES represents a first look at technologies that will make it to market in a few years, then carmakers are putting their research and development dollars in the right place. By 2017, nearly 90 percent of new vehicles in the U.S. will be of the connected variety, according to a study from ABI Research. The plethora of new in-car technologies and</description>
      </item>
      <item>
<title>DLNA's CES Mission: Premium Content on Any Device in Your Home</title>
         <link>https://www.broadcom.com/blog/dlnas-ces-mission-premium-content-on-any-device-in-your-home</link>
         <guid>https://www.broadcom.com/blog/dlnas-ces-mission-premium-content-on-any-device-in-your-home</guid>
         <pubDate>January 9, 2013</pubDate>
<description>Connecting everything is Broadcom's tagline, but we're not the only ones living up to its ideals. The DLNA (the Digital Living Network Alliance) is on the floor this week at the International Consumer Electronics Show demonstrating that it, too, is into connecting everything. [caption id=&quot;attachment_6678&quot; align=&quot;alignleft&quot; width=&quot;224&quot;] Spotted at CES 2013: DLNA's mission.[/caption] This year at CES, the DLNA group is joining forces with other connected-home organizations (MoCA, Wi-Fi Alliance and HomePlug) to show off all the ways it can give consumers their media content. It's pretty much what they want, and where they want it, according to Shane Buchanan, DLNA Certification Administrator. DLNA talked up its premium content guidelines, a series of standards for the playback of high-quality, premium commercial video and music offered to pay TV subscribers. DLNA works with cable, satellite and telecom service providers to protect the good stuff and provides link protection on each end of the data transfer. The extra layer of security allows broadcast operators to feel good about enabling consumers to share their content on multimedia devices without the risk of piracy. DLNA's big talking point is that its technology standards are agnostic: It doesn't matter what box or screen the content is played on. Whether it's streaming through a set-top box, a game console, a media server or a gateway and playing back on a TV, tablet, laptop or even a smartphone, DLNA seamlessly connects all of these devices so that you can digitally share multimedia content. Its technology acts as a behind-the-scenes traffic cop, according to Broadcom's Brian Wheeler, senior product line manager for cable modems in the Broadband Communications Group. It ensures high playback quality and interoperability between devices. [caption id=&quot;attachment_6679&quot; align=&quot;alignright&quot; width=&quot;300&quot;] DLNA's Shane Buchanan demos how music files can be moved around various devices in a home, via a tablet.[/caption] Most</description>
      </item>
      <item>
         <title>Greg Fischer in Wireless Week: &quot;Small Cells Play a Vital Role in Giving Operators and Their Subscribers Seamless Data Services&quot;</title>
         <link>https://www.broadcom.com/blog/greg-fischer-in-wireless-week-small-cells-play-a-vital-role-in-</link>
         <guid>https://www.broadcom.com/blog/greg-fischer-in-wireless-week-small-cells-play-a-vital-role-in-</guid>
         <pubDate>October 14, 2015</pubDate>
<description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Wireless Week, in which Greg Fischer, senior vice president &amp; general manager, Broadband &amp; Connectivity at Broadcom, talks about how small cells solve connectivity problems for both carriers and enterprises. From Wireless Week: Driven by economic efficiencies and ever-increasing data requirements, carriers are moving from trials to field deployments of heterogeneous networks, making this a big year for small cells. Small cells are roughly defined as anything smaller than a macrocell (including microcells, picocells and femtocells) and have a range of anywhere from 10 meters to 3 kilometers, depending on the environment. Their main purpose is to extend range, expand throughput and maintain quality of service for subscribers by effectively re-using spectrum to lessen the load on macrocells at times or in places of peak data or voice activity. [caption id=&quot;attachment_15354&quot; align=&quot;aligncenter&quot; width=&quot;800&quot;] Figure 1. A seamless transition between licensed and unlicensed cells in a heterogeneous network, where the emphasis is on an improved user experience, regardless of medium, is the calling card of small cells.[/caption] Over time, small cells have become a more attractive solution for carriers as low-cost, smaller form factor silicon has made deployment easier. In addition, network and data-load partitioning, billing, protocols, and usage models have been more clearly defined. The rapid evolution of small-cell technology has dovetailed with exponentially increasing user data requirements, from the home to public spaces to the enterprise. Combine the rapid rise in over-the-top (OTT) content streaming and the rapidly emerging Internet of Things</description>
      </item>
      <item>
         <title>It’s your flash/cache solution, we just made it BETTER with new ExpressLane &amp; CrossLink features on LightPulse Gen 5 FC HBAs</title>
         <link>https://www.broadcom.com/company/blog/headache-troubleshooting-faulty-optics-cables</link>
         <guid>https://www.broadcom.com/company/blog/headache-troubleshooting-faulty-optics-cables</guid>
         <pubDate>November 19, 2013</pubDate>
<description>If you’ve deployed a flash/cache solution in your environment, it’s pretty safe to say that you bought it because you wanted it to be fast, right? And you probably need the best reliability available because you purchased the system to support your most important, mission-critical applications that are likely to be running in a virtualized environment. With those objectives in mind, Emulex has developed two new standard features for its LightPulse® Gen 5 Fibre Channel (FC) Host Bus Adapters (HBAs) that help you do just that—deliver the best quality of service (QoS), performance, reliability and ultimately return on investment (ROI) on your flash/cache system purchase. Today, we introduced new ExpressLane™ and CrossLink™ no-charge features available only on Emulex LightPulse Gen 5 FC HBAs and Converged Fabric Adapters (CFAs). These new features are standards-based; built on the insanely reliable, high-performance FC protocol with no vendor lock-in; and don’t require any additional hardware/software purchases to leverage. So if you are going to be choosing new HBAs and CFAs, why not choose the only adapters that enable you to get the most out of your flash/cache systems supporting virtualized mission-critical apps? Both CrossLink and ExpressLane are managed via the Emulex OneCommand® Manager application. Here’s an overview of how the new features work: Emulex ExpressLane As flash storage is deployed into mixed storage environments or with hybrid storage arrays, the combination of data from rotating media and flash devices can cause congestion on the Storage Area Network (SAN), resulting in reduced performance of the expensive flash storage devices and diminished ROI on the flash investment. Emulex ExpressLane gives high-priority, mission-critical workloads more chances to transmit by tagging the associated Logical Unit Number (LUN), so that flash traffic receives precedence. Emulex ExpressLane technology offers: Improved Service Level Agreements (SLAs): ExpressLane technology delivers prioritized queuing</description>
      </item>
      <item>
         <title>Emulex 16GFC HBAs deliver up to 10 times better reliability to keep systems up and running</title>
         <link>https://www.broadcom.com/company/blog/16gfc-hbas-deliver-10-x-better-reliability</link>
         <guid>https://www.broadcom.com/company/blog/16gfc-hbas-deliver-10-x-better-reliability</guid>
         <pubDate>May 1, 2013</pubDate>
<description>A detailed reliability study by Emulex Labs shows that, based on component selection, the Emulex 16Gb Fibre Channel (16GFC) Host Bus Adapter (HBA) can deliver up to 10x better reliability than QLogic’s newly released QLE2600 series. The LPe16000B was designed with reliability in mind, with a cool-running ASIC and fail-proof passive heat sink for heat management within the server.

Emulex leads in reliability with the highest published mean time between failure (MTBF) in the HBA industry—10 million hours MTBF on the LightPulse family of 2G, 4G, 8G and 16GFC HBAs. For more information on why OEMs have deployed more Emulex LPe16000-series HBAs than any other 16GFC HBA, click here.</description>
      </item>
      <item>
         <title>Broadcom Zooms into Austin for SXSW with Connected Car Vision</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/broadcom-zooms-into-austin-for-sxsw-with-connected-car-vision/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/broadcom-zooms-into-austin-for-sxsw-with-connected-car-vision/</guid>
         <pubDate>March 7, 2014</pubDate>
<description>In the past decade, cars haven't changed much on the technology front. They get us from Point A to Point B safely, with a bit of intelligence and customization thrown into the dash. But that's about to change. More cars are getting connected: to their own internal networks, to the Internet, to your favorite devices, and later to each other. In recent years, Broadcom has been developing technologies that will further enhance and alter the driving experience, both under the hood and in the interior cabin. This weekend, at the annual South by Southwest (SXSW) Interactive festival in Austin, Texas, one of Broadcom's own will take the stage to talk about the evolution of the connected car and offer insight as to what's on the horizon for automotive connectivity. On Sunday, Ali Abaye, Broadcom's senior director of Automotive in the Infrastructure &amp; Networking Group, will deliver a presentation on Network Convergence Accelerates Toward Automobiles. The SXSW conference is known for being a venue where conversations about ground-breaking technologies cross into the mainstream. It's where Twitter and Foursquare, for example, made their debuts. It's also an ideal place to talk about the technologies that will continue to expand the connected car experience, Abaye said. SXSW's unique cross-disciplinary approach makes it a perfect venue to share insights on connected cars, he said. SXSW attracts the best and brightest in a wide variety of innovative fields, which is exactly what is needed to help connected cars achieve their full potential. The Connected Car Evolves Technology plays an increasingly large role in car-buying decisions, especially for younger drivers. Analysts predict that by 2025, almost all new cars will be connected. But the concept of connectivity today is different from what it will look like 10 years down the road. Carmakers are looking beyond</description>
      </item>
      <item>
         <title>Broadcom at CES: The Technology is Everywhere</title>
         <link>https://www.broadcom.com/blog/ces/broadcom-at-ces-the-technology-is-everywhere/</link>
         <guid>https://www.broadcom.com/blog/ces/broadcom-at-ces-the-technology-is-everywhere/</guid>
         <pubDate>January 12, 2012</pubDate>
<description>[caption id=&quot;attachment_559&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Getting down to business at the Broadcom booth. Photo by Willy Wong.[/caption] It's hard to believe that the 2012 International Consumer Electronics Show is winding down. Over at the Broadcom booth, it seems like the finishing touches were, well, just finished. Display screens were meticulously mounted, network switches were double- and triple-checked and running, and the techies have showcased the best they have to offer. Broadcom went bigger at the show than in years past. Its booth was more than 50% bigger this year, at a whopping 7,600 square feet of space for customer meetings and product demos. [caption id=&quot;attachment_556&quot; align=&quot;alignleft&quot; width=&quot;300&quot;] Lounge at the Broadcom booth. Photo by Willy Wong.[/caption] You see, unlike other CES exhibitors, Broadcom is less focused on how the products look and more on how the products work. Remember that Dish Network press conference, the one with the live kangaroos as a way to promote a new product line? It's a powerful Broadcom chip inside that's allowing Dish to offer a new user experience with the devices. It's the Broadcom chip that's helping to redefine what it means to channel surf. Over the past few days, thousands of people strolled through the Broadcom exhibit to get a sense of how the company is powering a range of high-tech devices that directly impact the consumer experience at home, at work and on the go. TV set-top boxes? Broadcom is there. Smartphones and tablets? You'll find Broadcom inside. Automotive information, safety and entertainment technology? Yup, more Broadcom, Broadcom and Broadcom. Wi-Fi displays, Smart TVs, Bluetooth connectivity, home networking equipment? You guessed it. Broadcom is everywhere. With so much excitement around each of those technology categories, it's safe to guess that pretty much everyone in attendance at this year's Consumer Electronics Show witnessed - at one point or another - some amazing technology that's</description>
      </item>
      <item>
         <title>The Dessert is On Us: Upgrade Your Fibre Channel to Gen 5 (16GFC)</title>
         <link>https://www.broadcom.com/blog/the-dessert-is-on-us-upgrade-your-fibre-channel-to-gen-5-16gfc</link>
         <guid>https://www.broadcom.com/blog/the-dessert-is-on-us-upgrade-your-fibre-channel-to-gen-5-16gfc</guid>
         <pubDate>October 27, 2015</pubDate>
<description>I like to eat at a particular restaurant in Silicon Valley and have been a regular here for a few years now. I have seen it grow from a hole-in-the-wall to a 50-seater hosting C-level executives. It has always delivered the fantastic repeat experience one would expect from a trusted restaurant – very tasty food, on-time service and value for my money. As I enter the restaurant on my latest visit, I notice it has had another upgrade. In typical fashion, I enjoy my meal and, as a perk for being a regular, the dessert’s on the house. I relish the dessert and reflect on my day. It has been a productive day in the valley: customer meetings (with enterprises looking for scale-up and scale-out storage), meetings with start-ups (discussing how they can leverage Broadcom technologies in their product plans), partner meetings (to build out an ecosystem where end customers benefit) and conversations with sales and engineering teams. Fibre Channel has had a similar run – it’s the trusted, purpose-built, secure and enterprise-proven storage technology. It has never disappointed and has consistently provided the right value for money – to CIOs, Solution Architects, Systems Engineers and IT administrators across financial/banking/insurance, healthcare, retail, transportation and several other sectors. The latest Emulex LightPulse LPe 16000 series Gen 5 FC Host Bus Adapters (HBAs) have been deployed into mission-critical SAN environments across the world, supporting a few hundred thousand 16GFC ports with backwards compatibility to 8GFC and 4GFC speeds. Now as a dessert (tiramisu, pie, strawberry cake, soufflé – eat one, eat all), Emulex has partnered with Brocade to provide a rich set of additional features to end-users at no additional charge. The feature set benefits end-users by simplifying deployment and management and increasing system uptime. Tim</description>
      </item>
      <item>
         <title>Gen 6 Performance Unleashes The All-flash Data Center</title>
         <link>https://www.broadcom.com/company/blog/gen-6-performance-unleashes-the-all-flash-data-center</link>
         <guid>https://www.broadcom.com/company/blog/gen-6-performance-unleashes-the-all-flash-data-center</guid>
         <pubDate>July 19, 2016</pubDate>
<description>The Gen 6 Fibre Channel ecosystem is rapidly developing, with the latest news coming from Brocade on its rollout of the new X6 Director family with Gen 6 technology. The X6 Director family will power the heart of the data center with 364 ports that can scale up with port blades, delivering a maximum port speed of up to 32 Gbps. The X6 Directors join a number of already released Gen 6 Fibre Channel products, including the Brocade G620 switch and the LPe31000/32000-series Host Bus Adapters (HBAs) from Emulex, Dell and Lenovo, with more OEMs expected to release their Gen 6 products soon. In just over four months, the Gen 6 Fibre Channel ecosystem is nearly complete, making it one of the most rapid Fibre Channel transitions ever seen. What’s driving such a rapid roll-out? Today’s applications can easily consume all of the performance that they get from Flash. To keep up, leading enterprise customers are deploying proven data center-class Gen 6 Fibre Channel networks to support these new requirements in performance with respect to latency, IOPS and bandwidth. The next big technology shift will be NVM Express, which delivers an even faster storage solution for all-Flash arrays. These solutions need a network that can keep pace with their throughput and IOPS demands to ensure the network does not become the bottleneck. At the same time, they need to be guaranteed extreme reliability, interoperability and security. For these reasons Fibre Channel is the primary connectivity choice for all-Flash array vendors. According to Brocade, close to 80 percent of Flash arrays are connected to Fibre Channel networks. Demartek labs took a look at performance bottlenecks that occur when using all-Flash arrays by testing an Oracle database data warehousing environment, connected to an all-Flash storage array using a Brocade Gen 6 switch and</description>
      </item>
      <item>
         <title>Broadcom Enables TV, Video and Internet Anywhere, on Any Screen</title>
         <link>https://www.broadcom.com/blog/home-entertainment/broadcom-enables-tv-video-and-internet-everywhere-on-any-screen/</link>
         <guid>https://www.broadcom.com/blog/home-entertainment/broadcom-enables-tv-video-and-internet-everywhere-on-any-screen/</guid>
         <pubDate>January 6, 2012</pubDate>
<description>You've waited a long time for your team to make it to the Big Game, and now it's your turn to host a Super Bowl party like no other. In the old days, this scene would entail your family and friends enacting an amateur version of a pre-game huddle in the living room, around the one big-screen TV in the house.

Nowadays you most likely have multiple screens under one roof. You might have a PC and monitor, a laptop, a tablet, a smartphone and at least a couple of TVs. In the past, if you wanted to watch every minute of the game, your options were limited to only one of these devices: the TV.

Wouldn't it be a dream to catch every play on any screen in the house?

Multi-screen home entertainment is becoming a reality with the benefit of technologies and standards such as DLNA, Wi-Fi, MoCA and HomePlug, plus transcoding, which enables broadcast content to be sized for wireless devices.

If you're like most people, watching Super Bowl ads is part of the experience, but for that rare person who wants to flip channels during the game, Broadcom also offers FastRTV channel change technology that lets TV viewers scan through channels up to five times faster, offering near-instant channel-change response from the remote.

With this in mind, broadcast operators are teaming up with connectivity experts like Broadcom to ensure that your guests are free to roam without missing a minute of the action.

From the kitchen, to the den, to the deck, or maybe even the bathroom, you're now able to stream broadcast content to tablets, laptops and smartphones for a true TV, Video and Internet Everywhere experience.

 </description>
      </item>
      <item>
<title>Technology Leap: 5G WiFi Helps DISH's New Wireless Joey Cut the Set-Top Box Cord</title>
         <link>https://www.broadcom.com/blog/technology-leap-5gwifi-helps-dish-networks-new-wireless-joey-cu</link>
         <guid>https://www.broadcom.com/blog/technology-leap-5gwifi-helps-dish-networks-new-wireless-joey-cu</guid>
         <pubDate>July 11, 2014</pubDate>
<description>Satellite TV provider DISH Network recently upped the ante on the cord-cutting craze with its Wireless Joey system, the first 802.11ac Wi-Fi set-top box to leap beyond the living room. The wires that the Joey ditches are being replaced with 5G WiFi, Broadcom's brand name for the 802.11ac Wi-Fi standard, which is recognized as faster, longer-range and more robust than previous generations. TV set-top boxes are part of a growing collection of new devices, including home networking routers, tablets and smartphones, getting outfitted with 802.11ac Wi-Fi. [caption id=&quot;attachment_12795&quot; align=&quot;alignright&quot; width=&quot;300&quot;] DISH's Hopper DVR system with Wireless Joeys.[/caption] Its benefits are especially clear for delivering satellite TV content via DISH's Joey devices. The 802.11ac Wi-Fi signal supports higher bit-rate video streams and more set-top boxes concurrently than other 802.11n Wi-Fi set-top boxes on the market. The heftier throughput is thanks in large part to Broadcom's BCM4360 chipset, which features 3x3 multiple input-multiple output (MIMO) antenna technology. The Broadcom chip with 3x3 MIMO deploys multiple antennas at both the transmitter and receiver ends to improve the signal, delivering significant increases in data throughput without the need for additional bandwidth or increased transmission power. The beauty of 5G WiFi and the Broadcom chipset is in the delivery of live TV. Current technology can send a high-quality signal from services that deliver previously recorded video content over the Internet, via apps such as Hulu and Netflix. The technical requirements for delivering a live TV signal are much more complicated because they involve real-time speeds that need to go to the end points with little tolerance for what's known in the industry as packet drop. To the consumer, that means an unacceptable delay in the feed or other glitch that would immediately show up in the video signal. It's the difference between watching a live soccer match, a baseball game,</description>
      </item>
      <item>
         <title>Motor City Sees Luxury-Class Tech Go Mainstream</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/motor-city-sees-luxury-class-tech-go-mainstream/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/motor-city-sees-luxury-class-tech-go-mainstream/</guid>
         <pubDate>October 15, 2012</pubDate>
<description>The car - no longer just a way to get around - has emerged as the next frontier for connectivity. For many consumers, it's a secondary space that's doing double duty as a place where work, entertainment and safety technologies come together. In the past, high-end features, including infotainment systems, driver assistance sensors and cameras, and telematics (a fancy way of describing the technology that integrates global positioning and navigation), were reserved for luxury-class vehicles. But that's changing, according to market researcher IHS iSuppli, as the cost of sensors, systems and the wires that connect them continues to fall. Entry-level cars, such as those made by Broadcom partner Hyundai, are set to be outfitted with the latest connected technologies, such as GPS, surround-view parking, lane departure warning systems and backseat displays. This week at SAE Convergence in Detroit, Broadcom will be talking about the connected car of the future alongside its partners, and other major automakers, suppliers and technology providers. We'll be exhibiting wired and wireless technologies that are set to bring next-generation features like infotainment, telematics and Advanced Driver Assistance Systems (ADAS) to entry-level car buyers. All of these features are made possible with in-car Ethernet, a cost-effective technology that's being championed by Broadcom and its auto manufacturer partners. Ethernet has the potential to redefine in-car networking because it's lightweight, scalable and supports the quick deployment of new applications. Already, a virtual who's-who of automakers, including Ford, BMW, General Motors and now Hyundai, have embraced Broadcom's BroadR-Reach Ethernet technology. And the ecosystem continues to grow. Today, the OPEN Alliance Special Interest Group announced its expansion, with leading global automakers joining the alliance to champion the benefits of Ethernet for automotive. As a founding member of the OPEN Alliance, Broadcom has witnessed a 13x growth in membership since the group's inception a year ago. New members include Daimler, Ford,</description>
      </item>
      <item>
         <title>Ethernet in Cars Lowers Cost of Life-Saving Backup Camera Tech</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/ethernet-in-cars-lowers-cost-of-life-saving-backup-camera-tech/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/ethernet-in-cars-lowers-cost-of-life-saving-backup-camera-tech/</guid>
         <pubDate>April 23, 2013</pubDate>
<description>The rear camera is the latest automotive safety feature to pick up some interest, from parents and carmakers to news outlets and lawmakers. The cameras are being positioned as a feature that will save the lives of small children. And a government mandate that was supposed to make these cameras standard has been tied up in political gridlock, which has news outlets from the Los Angeles Times to USA Today to CNN turning to parents who tell the stories of children who have been killed or injured in these horrific accidents. The numbers are heartbreaking: the U.S. Department of Transportation reports more than 200 people are killed and 17,000 injured every year in backover crashes, with children younger than 5 accounting for 44 percent of the fatalities, according to a CNN report. While the pressure is on Washington to take action, the technology has been evolving, as has the adoption. While the bureaucrats fight their fights, Broadcom has been quietly working behind the scenes with car manufacturers such as BMW and Hyundai to reduce the cost of their in-car connectivity and, therefore, the cost of installing backup cameras, through an Ethernet technology called BroadR-Reach. [caption id=&quot;attachment_6184&quot; align=&quot;alignright&quot; width=&quot;295&quot;] Click to enlarge the infographic to learn more about Broadcom's connected car tech.[/caption] BroadR-Reach uses lightweight, twisted-pair cables coupled with superior chipsets that help speed the flow of data traffic through an in-car network, enabling connectivity in autos that costs less than other types. Ethernet makes its case in a number of ways: It reduces overall weight (which helps with improved gas mileage), it's less expensive to install and it provides a unified communications platform for other in-car sensors and systems - a boon to automakers. One of the biggest advantages of BroadR-Reach technology is that it will bring advanced safety features like backup cameras to</description>
      </item>
      <item>
         <title>Broadcom Puts the Connected Car Front and Center at SXSW</title>
         <link>https://www.broadcom.com/blog/automotive-technology-2/broadcom-puts-the-connected-car-front-and-center-at-sxsw/</link>
         <guid>https://www.broadcom.com/blog/automotive-technology-2/broadcom-puts-the-connected-car-front-and-center-at-sxsw/</guid>
         <pubDate>March 10, 2014</pubDate>
<description>AUSTIN, TEXAS: The conversations in the halls at South by Southwest Interactive are known for being about cutting-edge, conceptual technology that's likely to make an impact on consumers and businesses. Some of these ideas, such as computer-assisted cooking and digital comic books, are easier to understand because they represent a new twist on an already-familiar experience. But others, such as augmented reality or space tourism, are a tougher sell. Broadcom's SXSW presentation this week on the future of the connected car found a sweet spot between the two. The talk attracted dozens of people to a convention center ballroom on a crisp Sunday morning and was presented by Ali Abaye, Broadcom's senior director of Automotive in the Infrastructure &amp; Networking Group. The concept of a technology-led driving experience isn't a hard one for drivers to understand. They're already familiar with Bluetooth technology in their cars and have come to depend on advancements in safety such as rear bumper cameras and sensors. Abaye explained that the car has evolved beyond just a mode of transportation. &quot;We still want the car to take us from one place to another, but there are a lot of other interesting things coming,&quot; he said. Of course, that includes wireless applications that will allow real-time information, whether traffic conditions ahead or sudden changes in the vehicle's performance, to reach the driver when it matters most. And certainly the role of enhanced infotainment for passengers is also part of the equation. But Broadcom's communications technology is unleashing so much more than that, Abaye said. The idea that the technologies in the car could communicate with other devices isn't out of the realm of the imagination. Imagine a smartwatch on the driver's wrist that prompts the car to crack open a window or turn up the radio volume when it senses,</description>
      </item>
      <item>
<title>Pay TV Goes Global: Countdown to Russia's Digital Transition</title>
         <link>https://www.broadcom.com/blog/pay-tv-goes-global-countdown-to-russias-digital-transition</link>
         <guid>https://www.broadcom.com/blog/pay-tv-goes-global-countdown-to-russias-digital-transition</guid>
         <pubDate>January 28, 2013</pubDate>
<description>Paid television services are on a growth tear in countries around the world, and Russia is emerging as a standout. At the CSTB Conference in Moscow this week, Broadcom's Dan Marotta, executive vice president and general manager of the Broadband Communications Group, will address attendees from the keynote stage to talk about the technologies that enable the portability of video and introduce the company's latest offerings for the country's upcoming switch to digital TV. &quot;Russian audiences increasingly value the convenience and comfort of more sophisticated broadcast services, and are ready to pay for these new technologies,&quot; said Sam Rosen, Practice Director at ABI Research. The number of Russian households that subscribe to at least one Pay TV service tipped into the majority (55 percent) last year and is expected to reach 64 percent, or 35.7 million households, by 2016, according to recent research. As the market continues to grow, so does competition among service providers, not just on subscription rates but also on the features that allow consumers to decide how, where and when they watch television. That's where Broadcom's technology enters the equation. Broadcom has developed several standards-based connected home technologies tailored to the Russian market to deliver high-quality, high-definition content in a secure, cost-effective way. At the same time, Broadcom's technologies are powering the infrastructure that will allow Russia's burgeoning landscape to ramp up and meet the demands as they arrive. Today's launch of the BCM7563 will help Russian cable, satellite, terrestrial broadcast and IPTV operators not only transition from analog directly to digital TV programming but also streamline their roll-outs of new programming for customers. It will also allow them to offer interactive, IP-based services, such as digital video recording (DVR), video on demand (VoD), advanced Web browsers and shopping. 
At the CSTB show this week, Broadcom</description>
      </item>
      <item>
         <title>In-Car Ethernet Paves the Way for New Features, Increased Efficiency [Video]</title>
         <link>https://www.broadcom.com/blog/in-car-ethernet-paves-the-way-for-new-features-increased-effici</link>
         <guid>https://www.broadcom.com/blog/in-car-ethernet-paves-the-way-for-new-features-increased-effici</guid>
         <pubDate>January 10, 2012</pubDate>
<description>Being part of Broadcom's Blog Squad provided me with a pre-show peek at the technologies that will be showcased in the Broadcom booth this week. I was particularly intrigued by a demo highlighting the use of Ethernet technology. Of particular interest to me were some innovations around in-vehicle technology.

In the video clip below, Ali Abaye, Broadcom's director of product marketing for the Infrastructure &amp; Networking Group, explains how Broadcom's BroadR-Reach technology offers in-car bandwidth of 100 Mbps over an unshielded twisted-pair Ethernet cable. With the technology, auto manufacturers can offer not only in-car infotainment streaming but also driver safety features, such as cameras that can sense when another driver veers into your lane.

The beauty is that this technology is already in the works and should start appearing in autos in the coming years.
</description>
      </item>
      <item>
         <title>Intel Haswell launch takes servers to the next level of performance</title>
         <link>https://www.broadcom.com/blog/intel-haswell-launch-takes-servers-next-level-performance</link>
         <guid>https://www.broadcom.com/blog/intel-haswell-launch-takes-servers-next-level-performance</guid>
         <pubDate>September 8, 2014</pubDate>
<description>Today, Intel announced the latest addition to the Xeon product line, the Intel® Xeon® Processor E5-2600/1600 v3 product family, formerly codenamed Haswell. Important? Well, important enough for many of the major server vendors to time announcements of their next major server lines to coincide with it. Emulex found plenty of cool new features in this product which, when combined with our features, provide some optimizations for next-generation workloads for enterprises, telcos and cloud providers. Customers are faced with an ever-increasing demand for computing and network resources to manage next-generation workloads, meaning big data analytics, in-memory databases, virtual desktop infrastructure (VDI), and emerging compute architectures (such as network functions virtualization (NFV) for telcos and OpenStack- or OpenCompute-based cloud computing). Emulex and Intel address this through: Increased Server Efficiency: Emulex vEngine™ storage and overlay network protocol offloads, combined with the 50 percent greater core count and double the memory using the latest DDR technology in the new Intel Xeon processors, drastically increase server efficiency with the ability to support more virtual machines (VMs) or virtual desktop instances (VDIs) per server. Scalable Application Performance: New Intel Xeon processors deliver increased PCI Express (PCIe) device scalability, supporting more adapters per server, enabling increased I/O capacity and optimal performance for applications such as virtualization, flash, and database management systems. 
Improved Network Bandwidth Utilization: Emulex Virtual Network Exceleration™ (VNeX) technology offloads the header encapsulation process of next-generation overlay network protocols, such as Network Virtualization using Generic Routing Encapsulation (NVGRE) and Virtual Extensible LAN (VXLAN), allowing customers to maintain CPU utilization thresholds and reduce CPU usage fluctuations while adding more workloads to each server in virtual networking environments. Emulex brings some pretty impressive new performance capabilities to the Haswell platforms coming out from our partners: Improve Server Utilization and Scalability: Emulex OneConnect with</description>
      </item>
      <item>
         <title>16GFC: Much more than a speed bump</title>
         <link>https://www.broadcom.com/blog/16gfc-much-more-than-a-speed-bump</link>
         <guid>https://www.broadcom.com/blog/16gfc-much-more-than-a-speed-bump</guid>
         <pubDate>October 10, 2011</pubDate>
<description>In preparation for the release of our latest Host Bus Adapter (HBA) – the LightPulse® 16Gb Fibre Channel (16GFC) – we took a look at its performance and were pretty amazed by the results compared to our previous-generation product. The LPe16002 is of course capable of running 16GFC, so you would expect performance to be about double that of 8GFC. But when you dig into the details of the 16GFC spec, you may be disappointed to find out that your data bits aren't actually flying over the wire at double the speed compared to 8GFC. 16GFC actually runs at a 14.025 Gbps baud rate where 8GFC runs at 8.5 Gbps, so it falls short of double, right? Wrong! The designers of the specification did a clever thing when they came up with 16GFC. All previous speeds used an 8b/10b encoding scheme, meaning that for every 10 bits flying over the wire, 8 of them are data and 2 are encoding overhead that keeps the link reliable, so only 80% of the bits are your data. For 16GFC, they changed the encoding to a much more efficient 64b/66b scheme, so far fewer bits are spent on overhead and a bigger share of them is your data. So the bottom line is that the 16GFC link rate delivers twice the data throughput of 8GFC. Delivering twice the performance of 8GFC HBAs is therefore the expectation, but there is much more to the story. Sure, as you can see in figure 1, the 16GFC LPe16002 HBA is capable of 1576MB/s compared to 789MB/s for our 8GFC LPe12002 HBA, almost exactly double. But the LPe16002 is also the first HBA with an 8-core processor and can deliver performance that is actually 5x that of previous adapters. Figure 1. Max I/O</description>
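The encoding arithmetic above can be sanity-checked in a few lines of Python. This is a back-of-envelope sketch; the baud rates (14.025 and 8.5 Gbps) and encoding ratios (64b/66b and 8b/10b) are the figures quoted in the post, and the function name is our own.

```python
# Effective payload throughput after line-encoding overhead:
# baud rate times the fraction of bits that carry data.

def effective_gbps(baud_gbps, data_bits, total_bits):
    """Payload throughput (Gbps) for a given line rate and encoding."""
    return baud_gbps * data_bits / total_bits

rate_8gfc = effective_gbps(8.5, 8, 10)       # 8GFC with 8b/10b: 6.8 Gbps of data
rate_16gfc = effective_gbps(14.025, 64, 66)  # 16GFC with 64b/66b: 13.6 Gbps of data

# Despite the line rate being less than 2x 8.5 Gbps, the payload rate doubles.
print(round(rate_16gfc / rate_8gfc, 2))  # prints 2.0
```

The takeaway matches the post: the leaner 64b/66b encoding, not the raw baud rate alone, is what lets 16GFC deliver exactly twice the data throughput of 8GFC.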
      </item>
      <item>
         <title>Innovation powers HDTV and Pay-TV to reach larger audiences</title>
         <link>https://www.broadcom.com/blog/television-2/innovation-powers-hdtv-and-pay-tv-to-reach-larger-audiences/</link>
         <guid>https://www.broadcom.com/blog/television-2/innovation-powers-hdtv-and-pay-tv-to-reach-larger-audiences/</guid>
         <pubDate>March 12, 2012</pubDate>
         <description>Broadcom continues to unveil technology that enables providers and operators to deliver a 21st Century television experience to larger audiences.

Today, at the Cable Labs conference in Philadelphia, Broadcom announced a new System on a Chip (SoC) technology that will help accelerate the HDTV transition for some 45 million analog TVs in North America that are currently connected to a cable TV service. The company also announced a new SoC offering that will provide Pay-TV providers with the technology to deliver both broadcast programming and Web-based applications in a stable and secure environment.
Related Coverage: Web Apps Find a Secure Path to the Living Room TV
While news of a technology to help transition analog sets to an HD signal is welcome for consumers still in an analog world, the real value comes in the form of freed bandwidth that can be used to launch more HD channels and higher-speed Internet services. In the short term, the TV quality gets better. In the long term, there's potential to revolutionize the full TV experience for everyone.

On that same front, Broadcom's new BCM7435 SoC opens the floodgates for Pay-TV operators who have been eyeing Web apps as part of their on-screen offerings but have been waiting for a technology that would allow them to offer apps without compromising the delivery of existing high-quality broadcast programming. The BCM7435 allows operators to control and monitor the two streams of content in a secure, behind-the-scenes process that protects each environment from being disrupted or compromised by the other.

While the enhancements to user experiences are important, it's also worth noting that Broadcom's technologies focus on things like reducing power consumption, improving performance and increasing efficiency.</description>
      </item>
      <item>
         <title>Pay TV Goes Global: Broadcom Brings Next-Gen Satellite TV into India</title>
         <link>https://www.broadcom.com/blog/emerging-markets/pay-tv-goes-global-broadcom-brings-next-gen-satellite-tv-into-india/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/pay-tv-goes-global-broadcom-brings-next-gen-satellite-tv-into-india/</guid>
         <pubDate>July 8, 2013</pubDate>
<description>Television is changing in India, and not just in the quality and variety of pictures on viewers' screens but also in the services and features that providers are now able to offer their customers. It's all part of an ongoing shift that stems from a couple of regional trends: the years-long changeover from analog broadcast to digital transmissions, as well as the deregulation and subsequent explosion of new operators on hand to offer TV upgrades. It's a big market with a lot of customers ready to explore new content and services. With a projected 158 million Pay TV households expected in India by 2018, the country falls just behind China as the second biggest global growth market for Pay TV. That has prompted TV operators to compete with each other by offering more sophisticated features: things like high-definition channels, digital video recording and video-on-demand. Offering high-definition (HD) channels, for example, has become a competitive advantage for satellite operators, according to Rajiv Kapur, senior director of business development in the Broadband Communications Group at Broadcom's Bangalore office. &quot;The satellite operators have started to market heavily into HD to retain customers and attract new ones, so there are more marketing efforts for features and promotions around these services,&quot; he said. To find their competitive edge, some operators in India are turning to Broadcom's satellite set-top-box technology to bring advanced features to life, not only at a lower price point but also without sacrificing industry-leading integration, features and performance. Broadcom recently unveiled two customer wins in the Indian satellite TV market. Dish TV India is the subcontinent's biggest direct-to-home TV operator, with more than 400 channels and some 15 million subscribers. 
The company also reaches 8,000 towns in India via a vast network of distributors and dealers. Dish TV India, a new customer for Broadcom, has picked two highly integrated systems-on-a-chip</description>
      </item>
      <item>
         <title>Broadcom Makes Headlines in India with Connectivity Tech Event</title>
         <link>https://www.broadcom.com/blog/emerging-markets/broadcom-makes-headlines-in-india-with-connectivity-tech-event/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/broadcom-makes-headlines-in-india-with-connectivity-tech-event/</guid>
         <pubDate>November 25, 2013</pubDate>
<description>Advanced smartphone features such as rich graphics, mobile payments or the screen-sharing capabilities that come with Miracast typically have been limited to the newest, premium devices sold in the U.S. and Europe. Now, the excitement around those sorts of offerings is spreading to emerging markets, notably India, a country where mobile phones have seen skyrocketing adoption because of their affordability more than the bells and whistles that they offer. [caption id=&quot;attachment_10394&quot; align=&quot;alignright&quot; width=&quot;288&quot;] Click to expand infographic: Learn more about the mobile technologies coming to affordable smartphones in India.[/caption] At a recent media event in New Delhi, Broadcom executives showcased technologies that bring these advanced features to the affordable smartphone category, allowing phone manufacturers and carriers that serve India to elevate their offerings for an increasingly sophisticated customer base. Broadcom's SoCs (Systems-on-a-Chip) for affordable smartphones feature a best-in-class suite of connectivity options, including NFC (Near Field Communication), Bluetooth Low Energy, enhanced Wi-Fi and support for global navigation technologies. The journalists who attended the event last week were wowed by the demonstrations of the technologies in action and, through their news coverage, explored the potential impact on the industry in India. Among them was Roydon Cerejo, who wrote in a blog post on Tech2India: &quot;Broadcom's renewed focus on the budget segment could potentially change what customers typically expect from smartphones: better wireless connectivity for affordable smartphones, given the boom in emerging markets like ours. Providing better wireless solutions in the budget segment is their primary goal and we feel they have a huge advantage here considering that's their pedigree.&quot; 
The writers were also excited about the vast array of uses that can come with embedded NFC technology. Sameer Mitha wrote in a post on Think Digit: Broadcom powered smartphones allow you to read business cards and advertisements, make payments, and do a lot more. If you have</description>
      </item>
      <item>
         <title>Broadcom Sees More Collaboration with Chinese Tech Companies</title>
         <link>https://www.broadcom.com/blog/emerging-markets/broadcom-sees-more-collaboration-with-chinese-tech-companies/</link>
         <guid>https://www.broadcom.com/blog/emerging-markets/broadcom-sees-more-collaboration-with-chinese-tech-companies/</guid>
         <pubDate>June 25, 2015</pubDate>
<description>Demand for pay TV and the network infrastructure to support the digital home is booming in emerging markets. Market watchers predict that China will have 323 million pay-TV households by 2020, with India supplying a further 179 million. China, India and Japan will jointly account for two-thirds of the region's $42 billion in pay-TV revenues by 2020, according to Digital TV Research. Broadcom has long shown its commitment to serving Chinese customers and the fast-growing population of consumer electronics buyers there with silicon targeting broadband rollouts, the Internet of Vehicles, the Internet of Things, and the telecom and wearables markets. As such, company executives are in Asia this week to host more than 100 technology journalists and talk to them about the company's strategies in the world's most populous country. At a media event this week, the company announced three memoranda of understanding, or MOUs, which signify budding customer partnerships. Broadcom recently inked deals with three key companies in China: StarTimes, a Beijing-based pay TV operator; Inspur Group, a Shandong-based systems integrator; and H3C Technologies Co., a Hangzhou-based networking company. Broadcom is recognizing the increasing importance of China, not just as a place where products with its chips get assembled, but as a base for new customers as well. Here are just a few of its recent engagements with Chinese companies delivering innovative technologies in the region: H3C Technologies, a leading provider of IP infrastructure products, will explore new market requirements for the next wave of networking, including cloud-scale networking, software-defined networking and the bring-your-own-device trend. 
Inspur and Broadcom are set to drive continued innovation in 4K Ultra HD set-top box offerings for China, where skyrocketing shipments of Ultra HD TV sets swelled to 2.6 million in the first quarter, IHS data showed. The agreement will tap Broadcom's compression technology expertise and Inspur's unique</description>
      </item>
      <item>
         <title>Myriad and Broadcom Partner to Deliver Android Ecosystem to TV</title>
         <link>https://www.broadcom.com/blog/ces/myriad-and-broadcom-partner-to-deliver-android-ecosystem-to-tv/</link>
         <guid>https://www.broadcom.com/blog/ces/myriad-and-broadcom-partner-to-deliver-android-ecosystem-to-tv/</guid>
         <pubDate>January 13, 2012</pubDate>
<description>The traditional passive television could well be a thing of the past. Broadcom and Myriad Group have teamed up to deliver a new set-top box called Alien Vue, bringing Android interactivity to home theater systems. What &quot;sets&quot; Myriad's solution apart (pun intended) from a typical Android-powered smartphone is the operating system itself: Alien Vue runs Dalvik, the virtual machine normally responsible for launching programs within the Android OS. Broadcom understands the power and memory demands of modern cell phone processors and, in response, has altered the common Android source code to run on its SoC (system-on-a-chip). Myriad started with the Dalvik engine and devised a way to strip it from the rest of the system, eventually mapping it to Broadcom's hardware. [caption id=&quot;attachment_706&quot; align=&quot;alignright&quot; width=&quot;150&quot;] Broadcom and Myriad team up on specialty set-top box. Photo by Willy Wong.[/caption] The end result is a fluid, high-definition, app-based experience. Popular games like &quot;Angry Birds&quot; can still run at a consistent 60 frames per second, even without the full Android subsystem. These apps are identical in every way to the standard apps from the Android Market. Since Alien Vue is a self-contained system, the set-top box does not have to be concerned with security or malicious applications; basic Android iterations would instead require a resource-intensive security mechanism. The product is also compatible with Myriad's Connect &amp; Share, which enables music, photos and videos to seamlessly stream between multiple devices. Broadcom and Myriad have achieved efficiency by separating necessary software from unnecessary hurdles; low power and high fidelity are the name of this connectivity game. 
[caption id=&quot;attachment_177&quot; align=&quot;alignleft&quot; width=&quot;119&quot;] Prashant Mantha, Blog Squad Member[/caption] Prashant Mantha: Prashant is a student at the University of California, San Diego, while also working as an intern for Broadcom's Mobile and Wireless Group. He's an experienced writer who started as the editor</description>
      </item>
      <item>
         <title>Bluetooth Pairing with Your TV</title>
         <link>https://www.broadcom.com/blog/ces/bluetooth-pairing-with-your-tv/</link>
         <guid>https://www.broadcom.com/blog/ces/bluetooth-pairing-with-your-tv/</guid>
         <pubDate>January 18, 2012</pubDate>
<description>Bluetooth has proven to be the dominant wireless technology in the office and on the go for connecting accessories and peripherals to our computers and mobile devices. The one domain into which Bluetooth has trodden only lightly until now, though, is the living room. That is quickly changing, however, as companies like LG Electronics have started adopting Bluetooth into their new televisions. So what can Bluetooth do for your TV? To view 3D images on a TV, a slightly different image needs to be transmitted to each eye. One way to achieve this is with active-shutter technology. Active-shutter glasses have LCD lenses that are capable of electronically darkening each lens. Infrared (IR) signals emitted from the TV will synchronize the shuttering of the lenses with alternating images on the TV. There are many downsides to using infrared for communication, however. IR requires a line-of-sight between the transmitter and receiver, which will limit both the versatility and the viewing angle of the glasses. It is also prone to interference from other light sources in the room, since they all radiate IR light as well. Worse yet, up until a few months ago, there was no standard IR communication protocol amongst manufacturers, so any given pair of glasses worked on only one brand of TV. The Full HD 3D Glasses Initiative wants to standardize active-shutter 3D technology, and Broadcom-enabled Bluetooth chips are helping to make that happen. Bluetooth is a more power-efficient alternative that doesn't require line-of-sight and isn't prone to interference from lights in the room. Gesture-Based Remotes. The Nintendo Wii has shown the world that gesture-based remotes can indeed be fun and enjoyable. It's no wonder that other companies have followed suit. The Bluetooth-enabled Roku 2 XS remote brings a new level of interactivity with your television by adding gesture-based</description>
      </item>
      <item>
         <title>Are You Wi-Fi Ready? Smart Devices Need Robust 5G WiFi to Maximize Their Potential</title>
         <link>https://www.broadcom.com/blog/are-you-wi-fi-ready-smart-devices-need-robust-5g-wifi-to-maximi</link>
         <guid>https://www.broadcom.com/blog/are-you-wi-fi-ready-smart-devices-need-robust-5g-wifi-to-maximi</guid>
         <pubDate>January 5, 2015</pubDate>
<description>When the Internet slows to a crawl, frustrated users are quick to question the strength of their signals or the performance of their devices. But there's something else that could be slowing the experience, and chances are that it's not the broadband connection. It's the router. Today's routers have a greater task than the routers of just five years ago. The modern-day router has to serve many devices, old and new and with varying capabilities, on the same home or office wireless network. In homes today, where consumer smartphones, tablets, set-top boxes and even appliances have joined the personal computer on the wireless network, the router tucked away in an upstairs closet is susceptible to slowdowns. Today, at the International Consumer Electronics Show, Broadcom is unveiling a suite of 5G WiFi-enabled router products designed to bring 802.11ac performance to the modern home Wi-Fi router or workhorse enterprise access point, so that speedier, bandwidth-busting hubs can better serve every connected device. &quot;Home and office networks have failed to keep up with the multi-device era,&quot; said Sanjay Noronha, director of product marketing for wireless connectivity at Broadcom. &quot;Broadcom is announcing two new 5G WiFi access point solutions that will deliver the industry's fastest Wi-Fi performance and extend 802.11ac performance to mainstream users.&quot; Broadcom's second-wave 5G WiFi routers, switches, and gateways support the latest 802.11ac standards, as well as older Wi-Fi standards 802.11a/b/n, and come in four varieties: BCM47094: 4x4 802.11ac Wave 2 multi-user MIMO router SoC for professional users BCM4366: 4x4 802.11ac Wave 2 multi-user MIMO internet gateway for set-top box providers BCM53573 and BCM47189: dual-band 2x2 SoCs for affordable and mid-tier residential routers and bridges With the announcement of the new Broadcom 5G WiFi access points, home and office users get considerable performance gains over previous Wi-Fi standards, Noronha said. 
It's like buying a</description>
      </item>
      <item>
         <title>A BroadR-Reach for the Connected Car at CES 2015</title>
         <link>https://www.broadcom.com/blog/a-broadr-reach-for-the-connected-car-at-ces-2015</link>
         <guid>https://www.broadcom.com/blog/a-broadr-reach-for-the-connected-car-at-ces-2015</guid>
         <pubDate>January 7, 2015</pubDate>
<description>LAS VEGAS: A stroll through the Las Vegas Convention Center this week left no doubt that the biggest mobile device on display is the automobile. But unlike at a traditional car show, no one at the International Consumer Electronics Show is asking about gas mileage and road performance. Instead, at the biggest technology show of the year, carmakers are touting automotive connectivity, from integration with smartphones and tablets to a growing auto app ecosystem to the technologies that will someday deliver self-driving cars to the mainstream. [caption id=&quot;attachment_14077&quot; align=&quot;alignright&quot; width=&quot;218&quot;] Broadcom's Connected Car demo at CES 2015[/caption] Broadcom has been talking about the connected car for some time now, largely around BroadR-Reach, a special flavor of Ethernet for cars that uses single, unshielded, twisted-pair cabling. Timothy Lau, director of Automotive Connectivity at Broadcom, said the technology can deliver up to an 80 percent reduction in connectivity cost for automakers and a 30 percent reduction in cabling weight on the vehicles themselves. For car buyers, these technological advancements deliver real features that consumers value, things like rear cameras and sensors, driver assistance tools and advanced infotainment capabilities. Next up is power and network security. At the show, Broadcom showed how BroadR-Reach can power a shark-fin antenna that captures AM and FM radio frequencies and delivers them to the car. Yes, radio broadcasts have been in cars for generations, but what's most impressive about this new antenna is that it also demonstrates BroadR-Reach's support for Power over Ethernet (PoE). &quot;Not only can we send data at 100 megabits per second over a single-pair unshielded twisted cable, we can send power as well,&quot; Lau said. 
And there are other benefits for automakers starting to adopt BroadR-Reach into their models, including Volkswagen and BMW, which both made announcements at the show. Among them: A secure</description>
      </item>
      <item>
         <title>Broadcom Engineer Sophie Wilson Named Computer History Museum 2012 Fellow</title>
         <link>https://www.broadcom.com/blog/broadcom-engineer-sophie-wilson-named-computer-history-museum-2</link>
         <guid>https://www.broadcom.com/blog/broadcom-engineer-sophie-wilson-named-computer-history-museum-2</guid>
         <pubDate>January 19, 2012</pubDate>
<description>Each year, a &quot;who's who&quot; of the technology world assembles in Silicon Valley at the Computer History Museum to honor industry leaders who have forever changed the world with their accomplishments. The Mountain View, Calif.-based museum's annual Fellows award, which has grown to 54 distinguished members, recognizes each honoree's role in the advancement of computing history and the impact of their contributions. Sophie Wilson, who is a Director of IC Design in Broadcom's Cambridge, U.K. office, has been named among the 2012 honorees. Wilson is recognized alongside fellow honoree Steve Furber for their previous work as chief architects of the ARM processor architecture. Other 2012 Fellows include Edward A. Feigenbaum, pioneer of artificial intelligence and expert systems, and Fernando J. Corbató, pioneer of timesharing and the Multics operating system. &quot;It's hard to believe that ARM has shipped over 30 billion CPU cores, and how much the world has changed since we were designing it,&quot; said Wilson, who is also a Fellow of the Royal Academy of Engineering and the British Computer Society. &quot;The Computer History Museum's recognition of innovation and its exhibitions help people to understand this.&quot; Wilson is in excellent company. Computer History Museum Fellows include Gordon Bell, Morris Chang, Douglas Engelbart, Bill Joy and Gordon Moore. &quot;The Fellows program recognizes the leading figures of the information age: men and women who have shaped the computing revolution and changed the world forever,&quot; said John Hollar, museum president and CEO. &quot;The Fellows are a tremendously distinguished group, and we are honored to celebrate their work and achievements.&quot; The four 2012 honorees are set to be inducted into the Museum's Hall of Fellows on April 28. To learn more, read the press release. 
About Sophie Wilson Wilson began studying computer science at Cambridge University in 1975.In 1977, she developed an automated cow-feeder for a Harrogate company during vacation, and</description>
      </item>
      <item>
         <title>What's Hot for CES 2014? Broadcom Talks Technology Trends at Geek Peek Event</title>
         <link>https://www.broadcom.com/blog/whats-hot-for-ces-2014-broadcom-talks-technology-trends-at-geek</link>
         <guid>https://www.broadcom.com/blog/whats-hot-for-ces-2014-broadcom-talks-technology-trends-at-geek</guid>
         <pubDate>December 3, 2013</pubDate>
         <description>For many, the arrival of December marks the beginning of the holiday season. But for the technology industry, December represents a different kind of frenzy: the final countdown to next month's annual Consumer Electronics Show in Las Vegas, considered to be one of the biggest shows of its kind. During a single week in January, every tech insider - bloggers, buyers, investors and press - will be clamoring for a peek at the consumer devices that will get the most attention at the show and in the months (and, sometimes, years) ahead. [caption id=&quot;attachment_10460&quot; align=&quot;alignright&quot; width=&quot;329&quot;] Click to expand infographic and learn more about Broadcom's technologies.[/caption] Here at Broadcom, we like to get the conversation started early. Because we develop the technology that powers many of the must-see products at the show, whether it's a smartphone equipped with next-generation Wi-Fi or a home appliance that wirelessly communicates with the network, we think it's important to offer a sneak peek, from a technology perspective, of what's on the horizon for CES. This week, we've invited some of the top tech journalists to join us in San Francisco for a look at the technologies that will drive trends in 2014. Called Geek Peek, our media event will be hosted by Henry Samueli, Broadcom's Co-Founder, Chairman of the Board and Chief Technical Officer, who along with a handful of other Broadcom executives will offer some insight on the technologies that Broadcom is focused on for 2014 and beyond. Obviously, we're not sharing too many details in this post, but we did want to provide a quick run-down of some of the topics we're most excited about. They include: Wearables: We've heard about smartwatches and eyeglasses connected to the Internet, part of a new category of connected devices called Wearables, but those are just the products that are stealing</description>
      </item>
      <item>
         <title>Broadcom's Reach at CES 2015 Catches Media Attention</title>
         <link>https://www.broadcom.com/blog/broadcoms-reach-at-ces-2015-catches-media-attention</link>
         <guid>https://www.broadcom.com/blog/broadcoms-reach-at-ces-2015-catches-media-attention</guid>
         <pubDate>January 9, 2015</pubDate>
         <description>LAS VEGAS -- The International Consumer Electronics Show, much like most industry trade shows, allows companies a chance to showcase their latest products, offer a peek at products on the year-ahead road map and hold face-to-face meetings with customers and partners. But one of the bigger wins at a show like CES is news coverage. Thousands of journalists and bloggers from around the world converge on Las Vegas for CES so that their readers and viewers can also get a glimpse inside. For any company, coverage in the news is a big win, and Broadcom was fortunate to get some attention, ranging from big headlines to passing mentions. Broadcom met with a number of journalists at the show, who were interested in a variety of tech topics where Broadcom's technologies stand out. Some of the highlights from news coverage included: Broadcom Chief Executive Officer Scott McGregor met with CNBC's Jon Fortt to discuss the news around Ethernet in the car and chat about the connectivity that will allow wearable devices to communicate with each other. Bloomberg's Brad Stone also met with McGregor to chat about Ultra HD TV and some of the technologies that will help Ultra HD content (which is becoming more common) reach the Ultra HD sets (which have been coming down in price) over the next year. McGregor also met with The Street's Chris Ciaccia to discuss a number of topics, including the Internet of Things, smart TVs and the next evolution of broadband. Across the blogosphere, the interest in Broadcom news and products was widespread. VentureBeat's Dean Takahashi chimed in about faster Wi-Fi and the impact that will have on wireless streaming, while PC World's Mark Hachman focused on Broadcom's news around the sampling of the first DOCSIS 3.1 chip and how gigabit speeds will impact cable TV. Junko</description>
      </item>
      <item>
         <title>MoCA Delivers Performance Boost for TV Home Network</title>
         <link>https://www.broadcom.com/blog/moca-2-0-performance-boost-for-tvs-home-network</link>
         <guid>https://www.broadcom.com/blog/moca-2-0-performance-boost-for-tvs-home-network</guid>
         <pubDate>January 4, 2012</pubDate>
         <description>One of the central themes across this year's Consumer Electronics Show is connectivity. And one of the areas where connectivity will be most evident is around the concept of the connected home, specifically as it relates to shared entertainment. Connectivity in the home isn't new, but it's getting more advanced and becoming more mainstream - and that's already creating some excitement ahead of this year's CES kickoff. Next week, Broadcom will be showcasing an update to a technology called MoCA, short for Multimedia over Coax Alliance. Among consumers, MoCA may not be a familiar buzzword - largely because it's a technology that consumers don't have to think about. In the industry, though, the technology is widely recognized and already utilized by some of the bigger names in the business - from Comcast and Cox to DirecTV and Dish Network, among others. It's fast becoming the industry standard for home entertainment networking. Consider the obvious benefits: Coaxial connections are common in most homes today, used by the cable and satellite industries since their inception as a means of delivering high-quality video through set-top boxes and into television sets. Coax offers high capacity and low latency and is shielded from noise and interference, especially when compared to wireless. Coax also works across various platforms - whether cable, telco/IPTV or satellite - and allows communications between all connected home devices. At the show, Broadcom will showcase the industry's first MoCA 2.0 integrated portfolio, including six new set-top box and Hybrid IP Gateway System-on-a-Chip platforms. MoCA 2.0 more than doubles home network performance and enhances the quality of video distribution in the home. It also enables more energy-efficient systems and supports higher levels of security for enhanced content protection. 
MoCA is already providing services that today's consumers love, from video-on-demand and multi-room DVRs to multi-player gaming and personal content</description>
      </item>
      <item>
         <title>Cutting the Cord: Wireless Charging Coming of Age at CES 2014</title>
         <link>https://www.broadcom.com/blog/cutting-the-cord-wireless-charging-coming-of-age-at-ces-2014</link>
         <guid>https://www.broadcom.com/blog/cutting-the-cord-wireless-charging-coming-of-age-at-ces-2014</guid>
         <pubDate>December 17, 2013</pubDate>
         <description>For an industry that's increasingly going wireless, the one cord that most people would like to cut - the power cord to their smartphone - is still holding consumers back. But that could soon be changing. Broadcom is working to bring wireless charging technology to the forefront by making it more powerful, more flexible and easier to use. And there's plenty of support around the effort, from technical alliances to device manufacturers, automakers and even the furniture industry. The Consumer Electronics Show in Las Vegas next month may prove to be the tipping point for wireless charging pads and other related devices. As the industry's biggest trade gathering, there will again be dozens of them on display, while the numerous companies that make them duke it out on the show floor. Related: Broadcom Announces Bluetooth Smart SoC with Wireless Charging Support for Growing Wearable Market These days, there are three different standards for wireless charging, including the Power Matters Alliance (PMA), the Wireless Power Consortium (WPC) and the Alliance for Wireless Power (A4WP), which recently unveiled its Rezence consumer-facing brand. Each standards-setting body is backed by hundreds of consumer device and chip companies. Industry-watchers see the groups coalescing around a single standard sometime next year. &quot;Nothing takes off in a big way until it becomes an industry standard that guarantees interoperability of multiple products from different vendors,&quot; Broadcom Co-Founder, Chairman and Chief Technical Officer Henry Samueli said in a recent Q&amp;A interview. &quot;We will see more and more wireless charging solutions come on the market as the standards crystallize in 2014.&quot; The end goal, Samueli said, is convergence around a single industry standard so you can charge any phone on any charging plate. Wireless charging isn't new, but the first generation of the technology, used by more than 40 different smartphones, relies on inductive technology that has</description>
      </item>
      <item>
         <title>Netgear Picks Broadcom for Powerline Push</title>
         <link>https://www.broadcom.com/blog/home-networking/netgear-picks-broadcom-for-powerline-push/</link>
         <guid>https://www.broadcom.com/blog/home-networking/netgear-picks-broadcom-for-powerline-push/</guid>
         <pubDate>January 7, 2013</pubDate>
         <description>They call it no-hassle networking. It's a phrase that's music to our ears this week at the International Consumer Electronics Show, where every kind of shiny new gadget is vying for both an Internet connection and a spot in our homes. We've talked about the simplicity and intuitiveness of Powerline Communications and how the HomePlug Alliance, a Broadcom partner and champion of the Powerline networking standard HomePlug AV, is taking the technology to the next level with new products all over the show floor. [caption id=&quot;attachment_1260&quot; align=&quot;alignright&quot; width=&quot;147&quot;] A Powerline Networking Adapter extends Internet connectivity at home via electrical outlets.[/caption] Companies like D-Link Systems, Belkin, Devolo, Asus, Buffalo and more are pushing Powerline adapters, and today Netgear joined the fray with its latest lineup of Powerline adapters built around Broadcom's BCM60321 HomePlug AV system-on-a-chip for consumers, businesses and service providers. Netgear's adapters are certified with the HomePlug AV1.1 standard, which ensures that the adapters will play nice with both Wi-Fi and older versions of HomePlug AV technology. Powerline adapters are virtually plug and play: One goes into an outlet near your broadband modem. A second closes the loop at an outlet elsewhere in the house. Not only does Powerline networking cut down on extra wires throughout the house, it helps Internet-connected devices get the throughput they need to perform their best - up to 200 Mb/second. Powerline networking is a boon to service providers, too, because they can offer new content freshly tailored for gaming consoles, smart TVs and set-top boxes. Couldn't make it to Vegas? Get the latest CES news from Broadcom and our partners by liking us on Facebook, following us on Twitter and reading the blog. 
Related: AnandTech: Netgear Selects Broadcom HPAV Solution for Powerline Products Powerline Communications: Standard Outlets Boost Home Networks Powerline Networks Get Boost with New Devices from Devolo Review:</description>
      </item>
      <item>
         <title>IPTV Gets Supercharged with 5G WiFi</title>
         <link>https://www.broadcom.com/blog/home-networking/iptv-gets-supercharged-with-5g-wifi/</link>
         <guid>https://www.broadcom.com/blog/home-networking/iptv-gets-supercharged-with-5g-wifi/</guid>
         <pubDate>January 8, 2013</pubDate>
         <description>Perhaps the most salient feature of the always-on digital age is that consumers want their content, and they want it now. Whether it's viewed on a smart TV, a tablet or a smartphone, streamed via an Internet-connected set-top box or stored on a laptop and beamed to another device, the expectation is still the same: Access to content should be easy, seamless, and available anytime, anywhere in the home. For cable, satellite and other pay-TV operators around the world, the need to deliver this kind of connectivity will soon bump up against the limitations of broadband in the home. A recent study from Bell Labs, the research arm of Alcatel-Lucent, shows that the increasing consumption of video content on mobile devices is expected to push wired broadband networks to their absolute limits over the next decade. That means operators will be challenged to provide consumers with the high-quality offering of triple-play services that is in such high demand around the globe. Bell Labs' study showed that as delivery of video content moves from traditional broadcast TV to the delivery of personalized content on demand, disproportionate pressure will be placed on the Internet Protocol edge of these networks. One solution - which Broadcom is demoing this week at the International Consumer Electronics Show - is to help broadband operators offload some of their networking burden onto the more robust, more reliable, next-gen 5G WiFi. Today, Broadcom announced that it's pairing up 802.11ac-speed Wi-Fi with its proven IPTV set-top box technology, enabling operators to wirelessly deliver high-def content and services to their customers. It also enables them to offer customers high-value services including multiroom DVR, Internet browsing via TV, gaming, specialized over-the-top content and other types of add-ons - all without wires. There could soon be a day when subscribers can wirelessly download content, stream video, and access</description>
      </item>
      <item>
         <title>Rich Nelson in Light Reading: DOCSIS 3.1 Can &quot;Bring Nearly a 100x Increase in the Average Data Rate to the Home&quot;</title>
         <link>https://www.broadcom.com/blog/rich-nelson-in-light-reading-docsis-3-1-can-bring-nearly-a-100x</link>
         <guid>https://www.broadcom.com/blog/rich-nelson-in-light-reading-docsis-3-1-can-bring-nearly-a-100x</guid>
         <pubDate>July 17, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Light Reading, in which Rich Nelson, Senior Vice President of Marketing, Broadband and Connectivity Group at Broadcom, talks about the advantages of the DOCSIS 3.1 standard. From Light Reading: Your cable Internet is about to speed up -- a lot. Historically, consumers have had limited options for home Internet, mainly over existing cable or phone lines. Although fiber deployments have been offering consumers gigabit speeds in the past year or so, fiber is not widely available. That will soon change, as a number of service providers have announced new gigabit services in more than 150 communities in the US. Later this year, cable companies will have the tools to offer those same gigabit speeds quickly and easily through DOCSIS 3.1 -- and without tearing up the streets. Benefits of DOCSIS 3.1 In an era where consumers demand high-quality broadcasts, high-bandwidth streaming television content, real-time interactive gaming, and remote home monitoring, DOCSIS 3.1 brings nearly a 100x increase in the average data rate to the home. This will eventually give consumers the kind of bandwidth needed to stream Ultra HD content from services such as Netflix to multiple screens and download an entire 14GB digital movie in less than two minutes. What's more, DOCSIS 3.1 offers two very significant benefits to cable operators. First, DOCSIS 3.1 is 25% more efficient than earlier versions of DOCSIS. This translates to hundreds of megabits more bandwidth, without making any changes to the network. Perhaps more importantly, the new standard results in higher capacity to those networks that were already 100% utilized. How DOCSIS 3.1 works: the nuts and bolts Until recently, DOCSIS standards used single-carrier, quadrature amplitude</description>
      </item>
      <item>
         <title>Broadcom's Sophie Wilson Honored as Computer History Museum Fellow</title>
         <link>https://www.broadcom.com/blog/broadcoms-sophie-wilson-honored-as-computer-history-museum-fell</link>
         <guid>https://www.broadcom.com/blog/broadcoms-sophie-wilson-honored-as-computer-history-museum-fell</guid>
         <pubDate>April 22, 2012</pubDate>
         <description>Sophie Wilson, a technologist whose innovative contributions paved the way for today's mobile phones, tablet computers, digital televisions and video game devices, is being honored this week at the 25th anniversary Computer History Museum Fellow Awards in Silicon Valley. Wilson, Broadcom's Director of Integrated Circuit Design, and fellow honoree Steve Furber, professor of Computer Engineering at the University of Manchester, are being recognized for their work on the BBC Micro and design of the ARM processor architecture. The ARM processor core is now used in thousands of different electronics products. In his book, ARM System-on-Chip Architecture, Furber notes that Wilson's original instruction set architecture survives, extended but otherwise largely unscathed, to this day. To date, more than 32 billion ARM cores have shipped, with nearly 7 billion of them shipped in 2011 alone. &quot;Sophie Wilson being named a Computer History Museum Fellow acknowledges the magnitude of her contributions to the tech world, and we are proud to have her expertise at Broadcom,&quot; said Greg Fischer, Broadcom Vice President &amp; General Manager for the Broadband Carrier Access Business Unit. Her contribution to Broadcom's own FirePath DSP - which was the foundation of Element14 (a company she co-founded with six others that was acquired by Broadcom) and subsequently our industry-leading DSL business - is now used in a variety of Broadcom products including DSL, STB, VoIP, PLC and small cell base stations. Wilson joins a prestigious league of Computer History Museum Fellows who have been recognized for their roles in the advancement of computing and the impact of their contributions. Other 2012 awardees include technology leaders Edward A. Feigenbaum, pioneer of artificial intelligence and expert systems, and Fernando J. Corbató, pioneer of timesharing and the Multics operating system. 
Past Computer History Museum Fellows include a veritable who's who of elite innovators, including</description>
      </item>
      <item>
         <title>Interop Preview: Network Infrastructure in the Spotlight</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/interop-preview-network-infrastructure-in-the-spotlight/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/interop-preview-network-infrastructure-in-the-spotlight/</guid>
         <pubDate>May 6, 2013</pubDate>
         <description>As you're reading this blog post, myriad networking processes are happening quietly in the background: data in the form of bits and bytes appears on your screen, parsed and sorted on command by switches, likely sourced and stored in massive data centers where servers comprise this nebulous thing we call the cloud. Considering that all of this occurs at blink-and-you-miss-it speeds and goes virtually unnoticed by the user, the technology is pretty impressive. At Broadcom, we live and breathe this stuff: the intricate data dance that's wrapped up in the back-end network infrastructure enables all your Internet-connected devices to do what they do. The tech spotlight will be shining on the behind-the-scenes world of networking and infrastructure at the annual Interop conference, which opens this week in Las Vegas. Data center networking - sometimes called big data - has been becoming more visible lately in the news, with headlines about Facebook's $1.5 billion mega data center in Iowa and Google pledging $2 billion to improve its data centers around the world this year. With that much investment on the table, network managers and IT experts converging at Interop will certainly be pulling out all the stops to up their network infrastructure game. In addition to showing off their latest product advancements, Interop exhibitors will be sending execs to speaker sessions and workshops, all centered on a group of topics: Big Data, Software-Defined Networking (SDN), Cloud platforms, security and BYOD (Bring Your Own Device). Make no mistake; the future of these trends will determine how every business on the planet operates in the coming years. Big Data is set to create smarter cities through optimized power grids, SDN is expected to make huge efficiency strides in power-hungry data centers, the cloud can enable local businesses to have global reach and network security is slated</description>
      </item>
      <item>
         <title>Rochan Sankar in SDN Central: &quot;Ethernet Speed Transitions Trigger Rapid Change in Ethernet Performance&quot;</title>
         <link>https://www.broadcom.com/blog/rochan-sankar-in-sdn-central-ethernet-speed-transitions-trigger</link>
         <guid>https://www.broadcom.com/blog/rochan-sankar-in-sdn-central-ethernet-speed-transitions-trigger</guid>
         <pubDate>July 18, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in SDN Central, in which Rochan Sankar, Director, Product Marketing in the Infrastructure and Networking group at Broadcom, talks about the new 25/50 Gigabit Ethernet standard. From SDN Central: A great deal of attention has historically been paid to Ethernet speed transitions in the market. Networking vendors, consumers, and industry analysts closely follow these transitions, because they can trigger new technology buying cycles and periods of rapid change in the Ethernet performance-cost curve. The rise of cloud computing and scale-out data centers has driven the latest Ethernet speed transitions, evidenced by the explosive growth in server-facing 10-Gbit/s ports this decade, and more recently the breakout in 40-Gbit/s Ethernet deployment - particularly in the leaf-to-spine layer of the data center - to an expected 2.5 million-plus ports in 2014. As big data becomes bigger, virtual machines grow in number, and cloud workloads become more demanding, it is expected that the largest cloud operators will soon shift to 100-Gbit/s Ethernet fabrics for the spine layer of their networks. But what happens to the server- and storage-facing Ethernet downlinks when leaf-to-spine optical links migrate to 100-Gbit/s Ethernet and CPU/storage endpoints demand greater than 10-Gbit/s network connections? These downlinks represent the largest number of cables deployed in mega-scale data centers (MSDCs), where cabling costs dominate. The IEEE 802.3 standard defines 40-Gbit/s Ethernet as the next higher link speed after 10-Gbit/s Ethernet, but the current standard uses four physical lanes running at 10 Gbit/s to enable communication between link partners. 
That's four times the number of physical channels on a server's network interface controller (NIC), four times the amount of copper wiring in the cables to the top-of-rack (ToR) switch, and four</description>
      </item>
      <item>
         <title>Update: Bluetooth Takes Control [Video]</title>
         <link>https://www.broadcom.com/blog/wireless-technology/bluetooth-takes-control/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/bluetooth-takes-control/</guid>
         <pubDate>January 10, 2012</pubDate>
         <description>Bluetooth technology is moving beyond mobile phones and into your living room. A standard feature enabling wireless hands-free communications in mobile phones, Bluetooth is now a household name for its convenience and user-friendliness. Take a tour of what Bluetooth can do to enhance home entertainment: Bluetooth is a key component in wireless game console controllers. Bluetooth's footprint is now expanding as top TV makers integrate it directly into televisions, 3D glasses, set-top boxes and Blu-ray players. Bluetooth can enable a wide range of home entertainment peripherals to radically transform how we interact with them. Bluetooth turns the old-fashioned remote control into a multi-use device. The remote control itself can come as a traditional remote, add a QWERTY keyboard or voice recognition for search capability, or use gestures to select content for viewing or for playing games. With Bluetooth in the remote control, there is no longer a line-of-sight restriction; you can travel with the remote to another room and adjust the volume or hit the pause button while grabbing a snack from the fridge. Bluetooth has evolved to deliver amazing battery life. Broadcom recently announced a Bluetooth chip for wireless keyboards that is capable of operating for up to 10 years on just 2 AA batteries. A single Bluetooth chip in the TV can support multiple peripherals simultaneously (remote control, audio streaming, 3D glasses, etc.), sorting out all the signals to ensure a satisfying experience while navigating the increasingly complex features and services available via your TV. Try doing that with an old-fashioned infrared remote! These same chips can be applied to the consumer electronics remote, another reason why Bluetooth is rapidly taking over control of your home electronics. Broadcom is demonstrating its Bluetooth products for the home at the 2012 International Consumer Electronics Show in Las Vegas. 
Prashant Mantha, Broadcom Blog Squad member, interviewed</description>
      </item>
      <item>
         <title>Broadcom Enhances Display Technology with Miracast Wi-Fi Certification</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-enhances-display-technology-with-wi-fi-certification/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-enhances-display-technology-with-wi-fi-certification/</guid>
         <pubDate>September 19, 2012</pubDate>
         <description>Sharing content wirelessly across devices should be a seamless process that just works. Now, through a new display certification program by the Wi-Fi Alliance, dubbed Miracast, there's an effort underway to standardize how Wi-Fi-enabled smartphones, TVs, laptops and tablets talk to each other. The idea is for consumers to be able to send, share and stream content between Wi-Fi connected devices seamlessly, without an intermediate box such as a router or gateway. Think of Miracast as a seal of approval for electronics devices so that problems with compatibility and interoperability are a thing of the past. Read the Wi-Fi Alliance's press release here. Broadcom is proud to announce that it has been selected by the Wi-Fi Alliance to be part of the first wave of Wi-Fi Alliance Wi-Fi CERTIFIED Miracast devices. For Broadcom, it's a significant milestone that validates Broadcom's technologies and the rigorous interoperability testing they undergo. Broadcom has a long history of driving standards-based technology, and this latest certification will ensure unparalleled integration for billions of users seeking simplified ways to enjoy the benefits of Miracast technology. Wi-Fi CERTIFIED Miracast devices use a Wi-Fi connection to deliver audio and video content from one device to another without cables or a connection to an existing Wi-Fi network, according to the Wi-Fi Alliance. The standard allows devices to directly connect to each other so users can do things like watch video streamed from a smartphone on a big-screen television or share a laptop screen with the conference room projector to collaborate in real time. For Broadcom, the Miracast certification comes on the heels of the company obtaining Wi-Fi certification for its TDLS (Tunneled Direct Link Setup) technologies. 
Related: Polygon: The Surprising (Mundane) Tech Behind the Wii U's Magical GamePad It's Official: Broadcom's WICED Products Earn Wi-Fi Certification Broadcom's TDLS Solutions Nab Wi-Fi Alliance Certification Wi-Fi</description>
      </item>
      <item>
         <title>Video Demo: 5G WiFi Enables Real-Time Sports Location Tracking</title>
         <link>https://www.broadcom.com/blog/wireless-technology/video-demo-5g-wifi-enables-real-time-sports-location-tracking/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/video-demo-5g-wifi-enables-real-time-sports-location-tracking/</guid>
         <pubDate>July 7, 2014</pubDate>
         <description>We've been talking about the everyday use cases for indoor location technology, but Broadcom is leveraging its expertise in Wi-Fi and GPS to take location technology to the next level. Industry watchers have long been heralding the benefits of location-based services, most of them a boon to mobile advertisers and shoppers. Yet for all of its utility, the uptick in adoption of such technologies has been slow due, in part, to the costs of implementation and privacy concerns. Broadcom's hoping to change that by doing something that others haven't yet been able to crack: the real-time tracking of people on the move. &quot;There is no consumer-friendly technology today that allows people to figure out their location indoors in real time to this level of accuracy,&quot; said Gabriel Desjardins, manager of product marketing in the Broadband and Communications Group at Broadcom. The two dominant options for indoor sports location tracking today involve mobile cameras and clusters of Bluetooth Low Energy beacons, both of which have limitations. Broadcom proposes a standards-based alternative that consumers are already familiar with: ubiquitous Wi-Fi hotspots that are deployed via access points in public places such as offices, malls and college campuses. Broadcom taps 5G WiFi - also known as 802.11ac, a faster and longer-range Wi-Fi technology on the 5 GHz band - to pinpoint a smartphone and accurately track its owner. In the video below, Desjardins demonstrates how Broadcom's indoor location technology can accurately track both a basketball player and a sprinter. The technology can even pick up on the micro-changes in acceleration and deceleration as a sprinter revs up to full speed and then slows to a halt. With more accurate indoor location information, mobile ads can get more relevant, and mobile phone users can easily get around the places they frequent the most - whether a supermarket, mall, library</description>
      </item>
      <item>
         <title>Broadcom’s 16nm PAM-4 PHY technology drives end-to-end cloud and data center interconnects</title>
         <link>https://www.broadcom.com/blog/16nm-pam4-phy-technology</link>
         <guid>https://www.broadcom.com/blog/16nm-pam4-phy-technology</guid>
         <pubDate>March 2, 2017</pubDate>
         <description>The emergence of switch processor chips with 56G PAM-4 interfaces has enabled new high-speed interconnects in cloud and data center networks. 50GbE, 100GbE, 200GbE and 400GbE are becoming de facto interfaces for network backplanes, line cards and pluggable transceiver modules, significantly increasing data throughput between two endpoints in a network.

Built on a proven 56G PAM-4 SerDes platform, Broadcom’s latest 16nm PAM-4 PHY devices deliver best-in-class reach performance with integrated FEC and equalization while offering unprecedented power efficiency. Complementing the latest switch processor chips with 56G PAM-4 interfaces, this third generation of PAM-4 PHYs fully enables end-to-end PAM-4 connections between network switches and routers, expanding the bandwidth capacity of cloud and data center networks.

Click on the corresponding links below for more information on the individual PHY products:

1) BCM81330 -- 8x56G PAM-4 PHY for network backplanes

2) BCM81328 -- 8x56G PAM-4 PHY for line card front ports

3) BCM81188 -- 8x56G PAM-4 PHY for pluggable transceiver modules (e.g., CFP8)

4) BCM81141 -- 4x56G PAM-4 PHY for pluggable transceiver modules (e.g., QSFP56)

5) BCM81128 -- 2x56G PAM-4 PHY for pluggable transceiver modules (e.g., QSFP28)

6) BCM81118 -- 1x56G PAM-4 PHY for pluggable transceiver modules (e.g., SFP56)

 
</description>
      </item>
      <item>
         <title>CES Video Demo: Passive Presence Tech for Context-Aware Location Apps</title>
         <link>https://www.broadcom.com/blog/ces-video-demo-passive-presence-tech-for-context-aware-location</link>
         <guid>https://www.broadcom.com/blog/ces-video-demo-passive-presence-tech-for-context-aware-location</guid>
         <pubDate>January 9, 2014</pubDate>
         <description>LAS VEGAS -- Wi-Fi access points will soon be doing double-duty. Not only will they connect your favorite devices to the Internet and the cloud, but they'll help triangulate your location indoors, pinpointing it within a meter.

With that kind of accuracy, new doors open for the development of what's called context-aware apps and services.

The underlying technology stems from chips and software that use Wi-Fi access points to pinpoint a device's location, said Shah Ullah, founder and Chief Executive of startup OmniTrail, which demonstrated its in-the-works product this week at the Consumer Electronics Show in Las Vegas.

OmniTrail is working with Broadcom and a division of Verizon to develop what Ullah called a &quot;passive presence&quot; solution.

It would enable carriers like Verizon, which operate massive public managed Wi-Fi networks used in hotels, coffee shops, malls and the like, to direct more relevant content and tailored advertising to users. It also enables businesses to offer value-added information, such as an airline directing you to your gate, or a library helping you locate a book.

&quot;Passive presence technology enables consumers to interact with a location or a space,&quot; said Laura Diaz, associate director of new business ventures at Verizon.

Watch the Blog Squad's Evgeny Vinnik get the scoop from Diaz, as she walks through a demo from the Broadcom booth:



Get the latest CES news from Broadcom on our dedicated website. Follow the Blog Squad and join the conversation on Twitter at #connectingeverything, like us on Facebook and follow the blog.</description>
      </item>
      <item>
         <title>Big Power in a Tiny Package: Broadcom Drives New, Smaller Set-Top Boxes that Boost Security, Wi-Fi Performance</title>
         <link>https://www.broadcom.com/blog/big-power-in-a-tiny-package-broadcom-drives-new-smaller-set-top</link>
         <guid>https://www.broadcom.com/blog/big-power-in-a-tiny-package-broadcom-drives-new-smaller-set-top</guid>
         <pubDate>January 6, 2015</pubDate>
         <description>LAS VEGAS -- Once again, this week's International Consumer Electronics Show is highlighting big trends around the living room television experience. But in addition to thousands of TVs packed with pixels or screens with sleek curves, some of the excitement this year is focused on the familiar set-top box. Responsible for delivering content to the viewer's screen, the traditional set-top box (STB) is not only shrinking in size but also gaining new capabilities, bringing new life to a class of palm-sized, plug-and-play streaming devices that deliver cable programming as well as video content over Wi-Fi without the clutter of an extra box and even more wires. This year's CES marks a turning point for handheld over-the-top (OTT) devices that can offer up both cable and streaming content from the Internet. The first-generation versions of these boxes encountered some technical challenges, such as power consumption and security, and lacked the ability for subscribers to access their existing cable subscription content. With cable operators and service providers looking to bring both cable access and streaming capabilities in these smaller form factors to their subscribers, Broadcom is improving the technology so it meets operator-grade quality and offers a top-notch experience for viewers. Broadcom's BCM7250 and BCM72502 chips, announced today at the start of CES, bring full-featured set-top box functionality to compact HDMI stick and popular streaming media player box formats, allowing carriers to deliver services wirelessly anywhere in the home, while taking up less space and with fewer wires. Read the press release here. Broadcom's set-top box technology enables multi-service operators to offer their customers a powerful Wi-Fi set-top box in a slimmer package that doesn't force a tradeoff between security and performance.
The BCM7250 is targeted at puck-sized set-top boxes, popular among OTT streaming media players such as Roku and</description>
      </item>
      <item>
         <title>5G WiFi Lineup Continues to Grow: Welcome D-Link</title>
         <link>https://www.broadcom.com/blog/5g-wifi-lineup-continues-to-grows-welcome-d-link</link>
         <guid>https://www.broadcom.com/blog/5g-wifi-lineup-continues-to-grows-welcome-d-link</guid>
         <pubDate>July 17, 2012</pubDate>
         <description>The 5G WiFi ecosystem just continues to grow, as D-Link today became the latest company to offer a product equipped with Broadcom's 5G WiFi technology.

Click on the image for an interactive graphic that explores the power of 5G WiFi.

Called the Cloud Router 5700 (DIR-865L), D-Link's product uses 802.11ac technology (5G WiFi) to deliver wireless speeds of up to 1,750 Mbps - three times faster than the previous generation of Wi-Fi, 802.11n. The differentiation comes not only from gigabit speeds but also higher capacity and broader coverage for home networks - features that are enhancing the adoption of high-bandwidth gaming and HD streaming applications.

Broadcom has been leading the market adoption of 802.11ac by being first to sample and ship the technology. It's focused on enabling the 5G WiFi ecosystem across all major wireless product segments, including routers and mobile devices. D-Link is the latest company to further enable this transition to faster, more reliable wireless coverage for HD-quality video streaming and near-instantaneous data sync.

How fast is Broadcom's 5G WiFi and what does it enable? Check out the fun video below for a sampling of its speed, and click through the interactive graphic for a more detailed look at its capabilities.



Related:

	5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle Next-Gen Devices and Demands
	5G WiFi: Pioneering the New Generation of Wireless Connectivity
	Broadcom at Computex: 5G WiFi and Gigabit Throughput [Video]
	First 5G WiFi Product Hits the Shelves
	5G WiFi Blog

Industry buzz: 

	Slashgear: D-Link announces Cloud Router 5700 
	Geeky Gadgets: D-Link Cloud Router 5700 Now Available

 </description>
      </item>
      <item>
         <title>The Flexible Cloud: Smart-Table Technology Enables Network Scalability</title>
         <link>https://www.broadcom.com/blog/the-flexible-cloud-smart-table-technology-enables-network-scala</link>
         <guid>https://www.broadcom.com/blog/the-flexible-cloud-smart-table-technology-enables-network-scala</guid>
         <pubDate>October 26, 2012</pubDate>
         <description>This summer, Broadcom showcased its newest solutions for cloud-scale networking for the throngs of IT professionals at VMworld. We also launched our latest solution for cloud-scale networking: the StrataXGS Trident II. Today, we continue the conversation around cloud-scale network architectures and how our new StrataXGS architecture can ensure that, regardless of cloud network type and design requirements, cost-effective implementation can be achieved at volume scale. Learn more about Broadcom's cloud-scale networking innovations. A critical element of cloud network scalability is the size of the forwarding tables in network switches deployed in the data center. This factor impacts many elements of data center scalability: the number of servers and the ability to load-balance and provide full cross-sectional bandwidth across switch links. In turn, these scalability elements directly impact application performance and mobility. Traditionally, data center networks were designed with the basic premise that a server has a single identity, which was composed of one MAC address, one IP address and a single application. Today, virtual machines increase the density of server identities with more MAC and IP addresses, as well as numerous applications. The number and types of active addresses in the data center network (MAC, L3 host and IP multicast addresses, LPM and ARP/next-hop entries) impact network topology designs, both legacy and emerging. The size of each of the forwarding tables in network switches has a bearing on how cloud networks can scale. When these tables reach capacity (because the forwarding tables in switches are small), scaling problems occur.
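The table-exhaustion problem described above can be illustrated with a minimal sketch (a toy model, not Broadcom's switch logic): a switch learns source MAC addresses into a fixed-size table, and once the table is full, new hosts cannot be learned, so traffic to them must be flooded.

```python
# Toy model of a fixed-capacity L2 forwarding (MAC) table.
# Illustrative only; real switch silicon uses hash/TCAM structures.

class MacTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # MAC address -> egress port

    def learn(self, mac, port):
        """Learn or refresh a MAC; returns False when the table is full."""
        if mac in self.entries:
            self.entries[mac] = port
            return True
        if len(self.entries) >= self.capacity:
            return False  # table exhausted: this host cannot be learned
        self.entries[mac] = port
        return True

    def lookup(self, mac):
        # None means "unknown destination": the switch must flood the frame
        return self.entries.get(mac)

table = MacTable(capacity=2)
table.learn("aa:00:00:00:00:01", 1)
table.learn("aa:00:00:00:00:02", 2)
print(table.learn("aa:00:00:00:00:03", 3))  # False: capacity reached
```

With virtualization multiplying MAC and IP identities per server, a table sized for one identity per host fills far sooner, which is exactly the scaling pressure the passage describes.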
An obvious way to deal with these challenges is to increase the forwarding table size, which also requires larger memories and higher operating rates. But implementing large memory blocks on high-performance silicon can be costly and consume more power. Adding external forwarding table memories is not an option either, because of the significantly high throughput demanded</description>
      </item>
      <item>
         <title>Broadcom's Ethernet Switches Deliver the Wave 2 Wireless Workplace</title>
         <link>https://www.broadcom.com/blog/broadcoms-ethernet-switches-deliver-the-wave-2-wireless-workplace</link>
         <guid>https://www.broadcom.com/blog/broadcoms-ethernet-switches-deliver-the-wave-2-wireless-workplace</guid>
         <pubDate>November 18, 2015</pubDate>
         <description>The promise of the all-wireless workplace isn't a new one. For years, evangelists have talked up its many potential benefits to employees: elevated personal productivity, hyper-mobility, seamless collaboration and unfettered access to any network via any device. With smartphones, tablets and other mobile devices becoming increasingly common on the job, enterprises large and small are inching closer to that vision. These days, employees expect high-speed, high-bandwidth wireless connectivity at all times. They often chafe against the constraints of the traditional workplace setup: docking stations, security firewalls and restrictions on the types of devices that can be used at work. Some offices have met these demands halfway by installing faster 802.11ac Wi-Fi. But, in many cases, enterprises are using a patchwork of Wi-Fi and legacy Ethernet network solutions that often can't keep up with the current mobile computing needs of their employees, let alone the demands of the future. That's where Broadcom's market leadership in Wi-Fi and switching come together. Today, the company announced the BCM56060 and BCM56160, edge-of-network switches for 802.11ac Wave 2 access points. They bring high-speed, robust Internet access to both traditional and all-wireless workplaces to meet booming demand for mobility. Ethernet speeds of 2.5 gigabits per second (Gbps) are set to help IT administrators be cost-effective while scaling wireless networks for on-the-go employees or BYOD (bring your own device) environments. Broadcom actively supports the standardization effort of 2.5 GbE in the IEEE. Based on Broadcom's industry-leading StrataXGS architecture, the two new chips help ease the transition to 2.5/5 Gbps Ethernet speeds at the edge of the network in both hybrid enterprise networks and high-density wireless deployments.
2.5G and 5Gbps Ethernet: According to recent data from Infonetics, sales of 802.11ac access points are up almost 10-fold over the past year. This has increased the pressure on enterprise technical officers to better manage and</description>
      </item>
      <item>
         <title>The 5 Ghz Band of Spectrum: Where Wi-Fi Roams Free</title>
         <link>https://www.broadcom.com/blog/wireless-technology/the-5-ghz-band-of-spectrum-where-wi-fi-roams-free/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/the-5-ghz-band-of-spectrum-where-wi-fi-roams-free/</guid>
         <pubDate>January 17, 2012</pubDate>
         <description>The market for 802.11ac Wi-Fi -- which Broadcom has branded 5G WiFi -- is approaching a tipping point. It's cropping up in hundreds of new consumer devices this year, including home networking routers, set-top boxes, smartphones and more. ABI Research showed that global shipments of 802.11ac Wi-Fi-equipped access points, routers and gateways surpassed 139 million last year. We've talked about the many reasons why 5G WiFi makes for such a great consumer experience on these devices, including faster throughput and wider range. Yet most consumers don't know the reasons for such improvements. We'd like to say it's because of Broadcom's engineering prowess, but that's just a piece of the puzzle. In fact, it's partly because 5G WiFi devices are designed to operate on an entirely different frequency of the wireless spectrum, compared with previous generations of Wi-Fi. Most Wi-Fi networks today reside on what the tech-geeks call the 2.4 GHz band. This represents a narrow swathe of spectrum available for wireless transmissions. Since this band is open to everyone, it is a crowded slice of the wireless spectrum. That means that devices operating with 2.4 GHz Wi-Fi also compete with Bluetooth devices, microwave ovens, cordless phones and baby monitors, all of which also use the same band. This inherently causes interference that leads to video buffering or slow downloads when these other devices are operational. The second challenge with the 2.4 GHz band is that it is relatively narrow. This means that neighboring Wi-Fi networks will more often than not interfere with each other, or will need to share the air time. This, in turn, implies that these networks (and their client devices, such as smartphones, streaming boxes, tablets) get nowhere close to their promised peak speeds. Since I'm an engineer, I wanted to show a good example of how this might play out in the real world. In my apartment, I</description>
      </item>
      <item>
         <title>NFC Ready for Mainstream Adoption with New Combo Chip</title>
         <link>https://www.broadcom.com/blog/wireless-technology/nfc-ready-for-mainstream-adoption-with-new-combo-chip/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/nfc-ready-for-mainstream-adoption-with-new-combo-chip/</guid>
         <pubDate>December 11, 2012</pubDate>
         <description>The concept of mobile payments, as well as the technology that makes it possible, has been around for a while now. But just like the early days of Wi-Fi and Bluetooth technologies, Near Field Communications technology has been slow to attract widespread adoption. That's starting to change. In recent months, analysts have started feeling bullish again around NFC's potential, and not just because of mobile payments. They're especially interested in an emerging ecosystem of use cases that could employ NFC technology. By innovating around NFC technology, Broadcom is helping to open doors to new use cases by making it possible for a growing number of products, from mobile devices to home electronics, to use the technology as a springboard for added features and services. Today, Broadcom is unveiling its latest NFC portfolio, which includes the industry's first chip to bring four proven technologies (NFC, Bluetooth 4.0, Wi-Fi and FM radio) into a single die. The quad combo chip, the BCM43341, could help spur NFC adoption by helping smartphone makers launch enabled devices cheaper and faster than ever before. For the higher end of the market, Broadcom offers a single card combining its 5G WiFi combo chip with NFC. Last month, Google announced the selection of Broadcom's open NFC software stack for all Android-based devices, including the new Nexus 10 tablet and Nexus 4 smartphone, and Nintendo announced the use of Broadcom's NFC technology in the new Wii U game console. A Market on the Rise: According to Forrester Research, more than 100 million NFC-enabled devices will be shipped by year's end. ABI Research sees some 800 million NFC-enabled devices on the market by 2016 and, of those, nearly 25 percent will be consumer electronics such as TVs, game consoles and tablet PCs. More than half will be smartphones. We see NFC as merely a</description>
      </item>
      <item>
         <title>Broadcom's Latest GPS Tech Zooms in on Geofencing</title>
         <link>https://www.broadcom.com/blog/wireless-technology/ahead-of-mobile-world-congress-broadcoms-latest-gps-tech-zooms-in-on-geofencing/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/ahead-of-mobile-world-congress-broadcoms-latest-gps-tech-zooms-in-on-geofencing/</guid>
         <pubDate>February 19, 2013</pubDate>
         <description>The idea of checking in to a location via smartphone and social network isn't as wild a thought as it once seemed. Through a virtual check-in, people are making real-life connections, tapping others for recommendations on places to visit or to eat, and even stirring up a bit of amicable travel-envy among online friends. Now that people are starting to become more comfortable with the check-in and, more importantly, are recognizing the value of it, Broadcom is introducing technology that is set to change the way mobile devices interact with the places users roam. In February, Broadcom unveiled the BCM47521, the industry's first Global Navigation Satellite System (GNSS) chip based on an architecture that not only opens new doors to location-based mobile apps but does so without draining the device's battery. The software on the chip taps into a concept known as geofencing: a virtual perimeter around a physical location, used to alert the user when there is an entry to or exit from the geofence. With traditional systems, monitoring a user's location with respect to a geofence would quickly drain the device's battery. Broadcom's technology is able to intelligently monitor the user's location as a background task, consuming less power and extending battery life for a better user experience. Geofencing is Just the Beginning: Simply sharing your locale (the check-in) is just scratching the surface of what location-based apps can do for consumers and the businesses who want to reach them. Consider this example: the app for your neighborhood coffee shop automatically notifies you about a new blend, offers you a 2-for-1 coupon or even lets you know that you're two more purchases away from that free cup when you walk through the door. As the adoption of mobile devices continues to grow and cellular networks, such as those promised</description>
      </item>
      <item>
         <title>Broadcom's GPS Technology Stands Up Against Major Satellite Constellation Outage</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcoms-gps-technology-stands-up-against-major-satellite-constellation-outage/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcoms-gps-technology-stands-up-against-major-satellite-constellation-outage/</guid>
         <pubDate>April 17, 2014</pubDate>
         <description>To understand the long reach of Broadcom's innovative technologies, consider the company's ability to respond to an unprecedented event that occurred in outer space earlier this month and affected millions of GPS users. On April 1, Russia's GLONASS satellite positioning system was hit by a major disruption that spanned half a day and impacted satellite navigation systems well beyond Russia's borders. The satellite systems are vitally important for a range of operations, from everyday smartphone use to the airplanes, ships and other dependencies that regulate the flow of goods on trade routes around the world. When GLONASS went offline, Broadcom's technology was able to detect the problem, shut down the failed satellite signal and use another satellite system as a backup (in this case, BeiDou, China's dominant satellite constellation) to ensure all positions were computed accurately and without interruption. That's because Broadcom's BCM47531 chip tracks five different satellite systems (GPS, GLONASS, QZSS, SBAS and BeiDou), essentially providing a fail-safe in situations when one satellite system goes down. Released in December, Broadcom's chip helps smartphones deliver up to twice the positioning accuracy by supporting multiple constellations. It works by comparing GPS measurements from each of the different satellite systems and checking the redundant positioning readings against each other in real time, rejecting the ones that are skewed. &quot;With Broadcom's ability to track five different satellite systems, we can make smarter decisions with regards to satellite selection,&quot; said Frank van Diggelen, vice president of technology, GPS, in the Mobile &amp; Wireless Group at Broadcom. &quot;Because you have data from multiple satellite systems, you can compare them to each other to determine if any one of them is wrong.&quot;
During the 12-hour GLONASS outage, Broadcom's multi-constellation receiver test data showed how the BCM47531 successfully identified and removed all of the bad ephemeris data</description>
      </item>
      <item>
         <title>Coexistence in the Air(waves) Key to Unlicensed Spectrum</title>
         <link>https://www.broadcom.com/blog/coexistence-in-the-airwaves-key-to-unlicensed-spectrum</link>
         <guid>https://www.broadcom.com/blog/coexistence-in-the-airwaves-key-to-unlicensed-spectrum</guid>
         <pubDate>August 11, 2015</pubDate>
         <description>As a global leader in wired and wireless communications chips, Broadcom has been a longtime champion of new technology standards. Whether it's for cable broadband, cellular, Bluetooth, Ethernet or Wi-Fi, the company has had a hand in helping shape the wired and wireless protocols that connect the world. The explosive growth of mobile data traffic demands it. In fact, it's projected by Cisco to reach more than 24 exabytes per month by 2019 (the equivalent of 6,079 million DVDs each month), up from 2.5 exabytes per month last year. Although there is a much talked-about increase in cellular data traffic, Wi-Fi is perhaps at the top of the shortlist of technologies that consumers depend on every day. Today, nearly half of all Internet traffic worldwide travels over Wi-Fi connections, according to the WifiForward coalition. Wi-Fi use is expected to skyrocket as consumers bring more connected devices (think: Smart Home, Internet of Things, and wearables) into their homes and workplaces. Scarce Wireless Spectrum: Wi-Fi, cellular and other wirelessly connected devices are designed to operate using airwaves called spectrum. The unlicensed spectrum, where Wi-Fi operates, has led to the development of many products that we all know and love, such as remote controls, garage door openers, baby monitors and wireless computer peripherals, just to name a few. In contrast, cellular technologies such as LTE are designed for spectrum that is owned and managed by mobile operators. These licensed spectrum technologies weren't originally developed with friendliness to other technologies in mind, because operators deploying them have exclusive spectrum rights. Spectrum, both licensed and unlicensed, is a scarce resource (so much so that a small amount was recently auctioned by the U.S. government, raising more than $45 billion). Companies are now working to deploy LTE in the unlicensed spectrum frequencies. There are different flavors of unlicensed</description>
      </item>
      <item>
         <title>New video highlights innovative solutions from Broadcom’s product portfolio</title>
         <link>https://www.broadcom.com/blog/new-video-highlights-innovative-solutions-from-broadcom-s-product-portfolio</link>
         <guid>https://www.broadcom.com/blog/new-video-highlights-innovative-solutions-from-broadcom-s-product-portfolio</guid>
         <pubDate>July 31, 2017</pubDate>
         <description>
Take a look at the many ways Broadcom products are enhancing the lives of people around the globe. This new video shows that 50 years of innovation is more than just a pedigree at Broadcom – this work and these products have laid the very foundation for the leading-edge technological solutions we’ll be using tomorrow.

Content-wise, the video highlights ongoing successes in a broad portfolio of differentiated products in the four primary markets Broadcom serves: Wired Infrastructure, Wireless Communications, Enterprise Storage, and Industrial &amp; Other. A closer look at the individual shots reveals product applications in the wireless, automotive, flash storage, cloud, alternative energy, data center and connected home business segments, among others.

Just press play, above, to watch the video.


</description>
      </item>
      <item>
         <title>Review: Powerline Tech Enhances Plug-and-Play Connected Home Experience</title>
         <link>https://www.broadcom.com/blog/review-powerline-technology-enhances-plug-and-play-connected-ho</link>
         <guid>https://www.broadcom.com/blog/review-powerline-technology-enhances-plug-and-play-connected-ho</guid>
         <pubDate>February 21, 2012</pubDate>
         <description>The idea of setting up a &quot;connected home&quot; network may sound intimidating - but it doesn't have to be. In fact, Powerline Networking technology actually uses existing (and pervasive!) home electricity (think wall sockets) to boost in-home network coverage for an easier plug-and-play connected home experience. It's a low-cost and easy way to connect devices and deliver content throughout the home. Recently, SmallNetBuilder reviewed a Broadcom-based Cisco Linksys Powerline AV adapter and highlighted the performance and throughput, saying &quot;you'll get the comfort of knowing that those bits are probably flowing a bit faster through your home or apartment's power lines.&quot; See comparisons and the full review here. What's Inside? The devices tested by SmallNetBuilder feature Broadcom's switch and Ethernet PHY technology as well as a Powerline Communications chipset. Designed in 40 nm, Broadcom's powerline technology is HomePlug certified and achieves high integration, low power and small form factors. See more product details here. As part of its review, SmallNetBuilder conducted some performance comparisons between several HomePlug AV-based adapters, including the 200 Mbps BCM60321 Powerline Networking Chipset featured in the Cisco Linksys Powerline AV kit. The tests found the Cisco Linksys Powerline AV adapters to have the highest performance in their class, with a 30% boost over any other existing 200 Mbps solution tested. It also found that when adapters are placed in non-adjacent rooms, Broadcom's powerline technology has performance equivalent to the 500 Mbps solutions tested at this time, confirming the claim that more expensive solutions will not necessarily provide better performance at moderately distant locations.
A Piece of the Puzzle: Broadcom's powerline technology (HomePlug AV) is part of Broadcom's home networking standards portfolio. Other key elements include Ethernet, Wi-Fi, MoCA, IEEE P1905 and DLNA. By supporting all key networking standards, Broadcom supports ever-increasing demands for additional bandwidth and multiple HD video streams for a more reliable</description>
      </item>
      <item>
         <title>Demystifying the Data Center, Part 2: The Potential of Power Reduction</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/demystifying-the-data-center-part-2-the-potential-of-power-reduction/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/demystifying-the-data-center-part-2-the-potential-of-power-reduction/</guid>
         <pubDate>November 12, 2012</pubDate>
         <description>Environmentally friendly alternatives like electric vehicles, eco-friendly modular homes, and alternative energy sources are in a race for sustainability as we attempt to slow down the cumulative effects of pollution, the thirst for technology and its demands on power consumption. Technology can be one of the biggest contributors to greenhouse gas emissions, with people's dependence on computers and mobile devices taxing data centers and the networks connecting them. It's a complex equation: How do we balance performance and power with energy needs and resource demands? Greening technologies across the data center has become an industry itself, with a report from research analyst group Pike Research showing that the worldwide market for green data centers is set to grow to $45 billion in 2016, up from $17 billion in 2012. To help demystify the process for organizations embarking on data center transformation, The Green Grid created an Academy Course for its Data Center Maturity Model that trains data center managers and non-IT professionals on how to measure a data center's energy efficiency. Broadcom is a longtime member of the Green Grid organization, which educates and informs companies on sustainable practices in the IT and communications industries. Its energy-efficient networking technologies effect change well beyond the data center by lowering the operating power of network equipment by 70 percent to 95 percent during periods of low link utilization. I've written about the importance of power consumption in the data center and Broadcom's efforts to green up the technology we all depend on. But what's the payoff?
For starters: Lowering data center power use translates to potential savings of CO2 emissions of up to 3.5 million metric tons (based on ~5 TWh/year) and the following equivalencies, according to estimates from the Environmental Protection Agency. The stats outlined below are based on the full deployment of energy-efficient Ethernet</description>
      </item>
      <item>
         <title>Shining Light on Black Holes in the Data Center</title>
         <link>https://www.broadcom.com/blog/shining-light-on-black-holes-in-the-data-center</link>
         <guid>https://www.broadcom.com/blog/shining-light-on-black-holes-in-the-data-center</guid>
         <pubDate>September 6, 2016</pubDate>
         <description>The audiences who flocked to the hit film The Theory of Everything were probably most interested in the love story between physicist Stephen Hawking and his wife, but a major subplot was a fascinating discussion about black holes that has a unique relevance today. For scientists like Hawking, and even to some members of the general public, black holes are a place in space where gravity pulls so much that even light cannot get out. If something disappears into a black hole (a dying star, for example), it's never seen again. That's why those running massively scalable data centers use the same term to describe a growing problem in network computing. Here's what happens: data traffic passes back and forth through data centers in packets. Specifically, packets move from what's known as the spine layer of the network to the top-of-rack (TOR) or leaf layer and then to the host. As with any journey, though, things can go wrong. If a network switch encounters a transient condition (for example, a configuration error), a packet's travels may start to look like an old game of handball, bouncing back and forth between the TOR and the spine until it basically gives up. That's when applications fail to deliver on your expectations: the packet has been black-holed. This is a big concern for any organization operating an enterprise or megascale data center, especially as they're investing in compute models (such as hosting public cloud environments) or advanced approaches to deliver applications based on Software-Defined Networking. As various overlay technologies are introduced into data centers, some complexity is a natural result, which makes the propensity for things like black holes ever greater. Black holes could mean not only interruptions to important digital transactions (think of a financial services trading desk) as</description>
      </item>
      <item>
         <title>FCC Decision Paves the Way for Broadband Innovation</title>
         <link>https://www.broadcom.com/blog/wireless-technology/fcc-decision-paves-the-way-for-broadband-innovation/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/fcc-decision-paves-the-way-for-broadband-innovation/</guid>
         <pubDate>May 19, 2014</pubDate>
         <description>Last week, there was an important development for the wireless broadband industry, one that's set to have a lasting effect on the way that consumers' smartphones, tablets and other mobile devices get connected to cellular and Wi-Fi networks. The Federal Communications Commission (FCC), the government body that oversees the airwaves that enable us to make phone calls, listen to the radio and connect to our favorite cloud-based applications, issued an order that lays the foundation for a long-awaited new home for mobile broadband. There are not many open frequencies available to accommodate consumers' insatiable demand for more wireless services, and demand is just one piece of the puzzle. Mobile data traffic is expected to increase eightfold by 2018, statistics show. So Congress and the FCC have worked together to incentivize TV broadcasters who've made the transition from analog to digital to return some of their frequencies, so-called TV white spaces, for use by wireless consumers. Broadcasters could benefit because the FCC plans to compensate them with shared proceeds from spectrum auctions. Wireless carriers could benefit because they would gain more spectrum to offer faster services to subscribers. Taxpayers could benefit because other auction proceeds would go toward a public safety broadband network and reduce debt. Perhaps most impactful is the consumer win that would come from the FCC permitting unused or underused spectrum for unlicensed technologies, such as Wi-Fi. As a global leader in wired and wireless communication semiconductors for devices using both licensed and unlicensed spectrum, Broadcom believes that the FCC struck the right balance with its recent order. It paves the way for wireless carriers to enjoy the benefits of these untapped, high-value airwaves while preserving some unlicensed spectrum for broadband Wi-Fi. This low-frequency spectrum in the 600 MHz band is critical to licensed and unlicensed broadband deployment, areas where Broadcom continues</description>
      </item>
      <item>
         <title>Viral Space Video Demonstrates Power of Broadcom's GPS Tech</title>
         <link>https://www.broadcom.com/blog/wireless-technology/viral-space-video-demonstrates-power-of-broadcoms-gps-tech/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/viral-space-video-demonstrates-power-of-broadcoms-gps-tech/</guid>
         <pubDate>October 19, 2015</pubDate>
         <description>Thanks to the power of smartphones, most consumers today already have an understanding of the power of GPS technology, whether it's helping them navigate through traffic or scout a new hiking trail. Now, thanks to a video that recently went viral across social media, people are starting to discover that GPS can provide much more information than just turn-by-turn directions. The video, shot on a GoPro camera as part of a weather balloon experiment, illustrated how GPS technology was able to log not only how far the balloon traveled but also how high and how fast. The balloon and its payload soared nearly 100,000 feet into the stratosphere and captured some stunning aerial shots of the Grand Canyon before reaching atmosphere so thin that the helium balloon burst and it parachuted back to Earth. The video of the balloon's space flight, which logged more than 6 million views and was featured in countless news stories, was captured by pairing the high-def camera with a GPS-enabled smartphone that contained a Broadcom GNSS receiver. Broadcom engineers provided the GPS-enabled phone and worked with Stanford University researchers to undertake this balloon experiment. The data logged by the GPS chip (see picture below) shows the three-dimensional path and velocity of the balloon, its ultimate height (98,816 feet) and the acceleration as the balloon burst and the device fell back to Earth (1g). The entire payload landed in the desert, out of cell coverage, and stayed there for almost two years until it was found by a hiker and returned to the research team. Not only did the Samsung Galaxy smartphone survive the flight, and record all the GPS data, but it still works, and is on display at the GPS engineering group &quot;wall of fame&quot; at a Broadcom office in Northern California. In the future, the phone</description>
      </item>
      <item>
         <title>Broadcom GNSS Chip Proves Successful Recovery of First Operational Galileo Satellite</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-gnss-chip-proves-successful-recovery-of-first-operational-galileo-satellite/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-gnss-chip-proves-successful-recovery-of-first-operational-galileo-satellite/</guid>
         <pubDate>December 15, 2014</pubDate>
         <description>When it comes to space exploration, especially launching a new satellite constellation that cost millions of dollars and countless man-hours, there's always a risk that things won't go as planned. The same is true when you're the technology company designing a chip that's intended to be compatible with that same constellation. That's why a recent &quot;Mission Accomplished&quot; moment for Broadcom was so significant. [Image: Galileo satellite revised orbit. Photo courtesy of GPS World.] Earlier this month, Broadcom announced the BCM4774, the industry's first GNSS location hub to support Galileo, a global satellite system built by the European Union. With Galileo support, mobile devices and the consumers who use them are set to see significant benefits, such as more accurate positioning and faster time-to-first-fix. But two of the planned 24 Galileo satellites, which launched together on August 22, got off to a pretty rocky start this past summer. They were pinned to an exaggerated elliptical orbit that undershot their intended cruising altitude of 23,222 kilometers above the Earth. For the non-space-geeks, it basically meant that the satellites that had been in the works for more than 15 years were non-functional. They were in an orbit, but not the intended one, said Frank van Diggelen, vice president of GPS technology at Broadcom. If it's too elliptical, it isn't good for the satellite signals and GPS receivers on the ground. Through a series of deft maneuvers, aerospace engineers from the European Space Agency (ESA) got the satellites back on track. Over a two-week period, they worked to nudge the satellites closer to their intended orbits. One of the satellites was deemed &quot;recovered&quot; and fully transmitting navigation symbols in early December. That brings us to the critical testing of the BCM4774. The tests were successful and, as such, provided the first confirmation that the</description>
      </item>
      <item>
         <title>Gigabit broadband access drives tomorrow's broadband applications</title>
         <link>https://www.broadcom.com/techblogs/broadcomblogs/gigabit-broadband-access-drives-tomorrows-broadband-applications</link>
         <guid>https://www.broadcom.com/techblogs/broadcomblogs/gigabit-broadband-access-drives-tomorrows-broadband-applications</guid>
         <pubDate>June 7, 2017</pubDate>
         <description>Global Internet traffic continues to grow by leaps and bounds. By 2020, four billion people will be using the Internet and immersed in a world where 26 billion electronic devices are connected [See Figure 1]. The amount of data demanded from these Internet-connected devices will be astronomical. Digital consumers will need faster broadband speeds to access and enjoy rich multimedia web content in real time. Figure 1: Global IP traffic and service adoption drivers (source: Cisco VNI 2016) - More Internet Users: 3.0 billion (2015) to 4.1 billion (2020); More Devices and Connections: 16.3 billion (2015) to 26.3 billion (2020); More Video Viewing: 70% of traffic (2015) to 82% of traffic (2020). Next-generation broadband access networks must be gigabit-capable in order to handle the deluge of Internet traffic and support bandwidth-intensive applications that are essential to the modern digital lifestyle, such as high-definition video conferencing, 4K/8K video streaming, real-time virtual reality (VR), massively multiplayer online (MMO) gaming, remote medicine, distance learning and cloud data access. Successfully supporting these bandwidth-intensive applications requires a gigabit broadband infrastructure that reliably delivers data from the core of service provider networks to the access points. By and large, the bottlenecks that inhibit consumers from accessing gigabit broadband services are in the “last mile” links between the customer premises and the operator broadband infrastructure. There are several leading technology implementations across the cable, DSL and fiber infrastructure that enable high data rate broadband in the “last mile” [See Figure 2]. Each implementation has its own technical and cost challenges. Figure 2: Leading &quot;last mile&quot; technology implementations. Broadcom offers the industry’s most comprehensive portfolio of broadband access solutions addressing “last mile” challenges for global service providers. From broadband modem ICs for the customer premises to the CMTS/CCAP, DSLAM and OLT SoC platform solutions for the operator broadband infrastructure, Broadcom’s products enable global service providers to</description>
      </item>
      <item>
         <title>What Romley Means for Broadcom and the Industry</title>
         <link>https://www.broadcom.com/blog/what-romley-means-for-broadcom-and-the-industry</link>
         <guid>https://www.broadcom.com/blog/what-romley-means-for-broadcom-and-the-industry</guid>
         <pubDate>February 10, 2012</pubDate>
         <description>If you've been following recent trends in the server market, you must be as excited as we are here at Broadcom about the upcoming rollout of Intel's Romley processors, and the subsequent rollout of Romley-based products from server OEMs. As the volume of network traffic, networked devices and huge amounts of data continue to ramp at alarming rates, current server and I/O architectures are under the gun to deliver. With increasing demands for server virtualization, cloud computing, and I/O intensive applications, the need for greater processing capabilities in enterprise networks and data centers is paramount. New technologies being deployed to meet these needs include 10 Gigabit Ethernet (10GbE) server networking solutions: NIC Partitioning, Data Center Bridging/Lossless Ethernet, Fibre Channel over Ethernet (FCoE), iSCSI (over Lossless Ethernet), and last but certainly not least, Modular/Flexible LAN-on-Motherboard (LOM)/Daughter Cards. The trend toward server virtualization has resulted in the deployment of multiple network connections per server. Given the diversity of traffic that a converged 10GbE network can now handle, some 10GbE adapters, including Broadcom's (shameless plug #1), offer NIC partitioning or separate virtual fabrics that enable a single 10GbE port to be divided into a number of separate ports. Each of these partitions needs multiple queues behind them that allow the controller to steer traffic into separate queues based on Ethernet addresses and other header information (i.e., a hardware-managed 4-tuple hash, for those of you who really like details). This is imperative to spread traffic processing across the many processor cores that will soon be available with Romley-based servers. It's also a unique capability found on Broadcom 10GbE controllers (shameless plug #2). With the introduction of Romley-based servers over the next few months, we look forward to seeing Broadcom networking technology widely deployed, aligning high-performance Romley processing with high-performance networking and storage. Want to learn more? Read this</description>
      </item>
      <item>
         <title>Broadcom at Interop: Knowledge-Base and Multi-Core Processors Complete Broadcom Portfolio</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-knowledge-base-and-multi-core-processors-complete-broadcom-portfolio/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-knowledge-base-and-multi-core-processors-complete-broadcom-portfolio/</guid>
         <pubDate>May 9, 2012</pubDate>
         <description>This week at Interop 2012, we're thrilled to display products from our newly formed Processor &amp; Wireless Infrastructure line of business, a result of our recent acquisition of NetLogic Microsystems. The addition of leading knowledge-base and multi-core embedded processor technology to our growing Infrastructure &amp; Networking portfolio extends Broadcom's addressable market and provides our customer base with a complete end-to-end solution. Combining bandwidth, connectivity and intelligence into one seamlessly integrated platform solution results in reduced integration costs, enhanced system performance and lower execution risks for our customers. Knowledge-base and multi-core processors are benefiting from the same drivers fueling growth in Ethernet switching, including LTE build-outs, cloud computing, the explosive growth in mobile traffic and ever-increasing use of video. According to the Cisco Virtual Networking Index, video transport will represent 90 percent of all Internet traffic by 2015. Transporting video from the user through the core of the network is no small task. It requires a great deal of processing power. Anytime a video crosses the Internet, chances are our latest processors are managing the heavy lifting. In fact, we estimate that 99.98 percent of all Internet traffic now crosses a Broadcom chip. To see exactly what NetLogic has brought to the Broadcom party, come see the complete line of NETL7 KBP, XLP, XLR and XLS processors in the Broadcom booth this week at Interop. You can follow our news from the show by following us on Twitter or visiting our website. Full Coverage: Broadcom at Interop 2012 | Broadcom at Interop: Power Consumption Technology Plays Important Role | Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet | Technology Moving at the Speed of Life: Broadcom Enables Massive Network Scalability | Enterprise 2.0: Broadcom Puts Network Managers in the Fast Lane | Broadcom at Interop: Next-Generation Data Centers Shift into High Gear | Broadcom</description>
      </item>
      <item>
         <title>BYOD: Facing the Challenges When You Bring Your Own Device to Work</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/byod-facing-the-challenges-when-you-bring-your-own-device-to-work/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/byod-facing-the-challenges-when-you-bring-your-own-device-to-work/</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>You've seen them around town in the coffee shops, on the train and, yes, even at home. They're the people who carry two smartphones around, one for work and one for personal use. Others tote both a smartphone and a tablet, and can't do without either. The &quot;two is better than one&quot; approach has become somewhat of a necessary evil as a growing number of companies are faced with unprecedented security concerns around mobile access to their sensitive files and databases. Likewise for workers, carrying two devices has been the best way to keep the IT department from sniffing around personal mobile matters, from text messages and social media updates to Instagram photos and gaming apps. But a shift is on the horizon and many companies that have been side-stepping the concerns that come with a concept called BYOD, or Bring Your Own Device, are fast discovering that they can no longer ignore it. The idea of carrying around a second smartphone for secure access to company files and email is something that workers accepted, but never really liked. For IT departments, the problem has only intensified as employees connect their own devices, such as tablet PCs, into the workplace networks while also having access to consumer cloud software, such as online storage sites that would allow company documents to be uploaded to a consumer cloud. In the U.S. alone, 37 percent of workers are bringing their own gadgets into the workplace without formal permissions or policies in place, according to Forrester Research. Globally, that number jumps to 57 percent, according to a study of 17 global markets by research firm Ovum. Meanwhile, 18 percent of the respondents to Ovum's survey said their employer's IT department has no idea that workers are using their own gadgets, while 28 percent of those surveyed said IT managers &quot;actively</description>
      </item>
      <item>
         <title>Broadcom Takes on BYOD: It Starts in the Network</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-takes-on-byod-it-starts-in-the-network/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-takes-on-byod-it-starts-in-the-network/</guid>
         <pubDate>January 28, 2013</pubDate>
         <description>Gartner has called the Bring Your Own Device (BYOD) trend the single most radical shift in business computing since the PC invaded the workplace, and Broadcom is gearing up for even greater momentum around BYOD in 2013 with technologies that will address the challenges head-on. For office workers, BYOD means more flexibility for the work environment, which makes them more productive while also allowing them to have a better work-life balance. But for IT departments, BYOD can be a real headache as they find themselves managing a flood of different devices, apps and platforms that each come with their own risks and benefits. IT managers are facing the monumental task of provisioning appropriate gear and protecting the company's proprietary information while also ensuring the privacy of the employee. The BYOD conundrum has only intensified as employees bring their own devices, such as tablet PCs, into the workplace, tap into the company's network and still maintain access to consumer cloud software and productivity apps, such as GoToMeeting, Evernote or Dropbox. It's also important to note that BYOD is about more than just managing potentially risky online activities and keeping information separate and secure. Perhaps the biggest BYOD challenge for network managers is the problem of more: more devices, more network traffic, more apps and more bandwidth requirements, all driven by the rapid uptake of data-rich smartphones and tablets by consumers and companies alike. These mobile devices, smartphones and tablets notably, are being carried into the workplace and tapping into the company's network for Web surfing, streaming, applications and other bandwidth-hogging online activities. While this challenge can be dealt with on some level with new company policies and procedures, Broadcom is tackling it at its core: the network. In a press release issued today, Broadcom introduced four new switch system-on-a-chip families that</description>
      </item>
      <item>
         <title>Demystifying the Data Center, Part 1:  Power Consumption Matters</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/demystifying-the-data-center-part-1-power-consumption-matters/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/demystifying-the-data-center-part-1-power-consumption-matters/</guid>
         <pubDate>October 21, 2012</pubDate>
         <description>With spiraling energy costs and increased demand for computing power comes a heightened focus on greening the data center. The topic has attracted even more attention of late after the recent New York Times investigation and multipart series about the impact of the cloud on the environment. In an effort to enlighten the public on the true cost of constant iPhone use, Facebook-scrolling and Internet-surfing habits, including tallying the environmental toll exacted by the data centers run by Yahoo, Google, Apple and other Internet titans, the Times conducted a year-long investigation into the power consumption of data centers. One of the most striking results of the study, which tapped the data-crunching prowess of McKinsey &amp; Company, found that data centers can waste 90 percent or more of the electricity they pull off of the grid. The Times reported: Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. Though the report has received a fair amount of attention (both good and bad), the crux of the articles should not be ignored. Data center managers, enterprises and the people who depend on them should be aware of the dangers of rapidly escalating power consumption. Broadcom and the Big Picture: That's where Broadcom comes in. Aside from considering the inefficiencies of active vs. idle technologies, we take a holistic view of the data center to determine mechanisms for keeping power consumption low while maintaining maximum efficiency. Take this finding from the Times study: On average, servers only actually used 6 to 12 percent of their electricity. The remaining power was used to keep the servers idling and ready for activity. From Broadcom's perspective, servers and computers aren't the only guilty parties for abusing power in</description>
      </item>
      <item>
         <title>New Consortium: Faster, More Cost-Effective Ethernet Possible at 25 / 50 Gbps</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/new-consortium-faster-more-cost-effective-ethernet-possible-at-25-50-gbps/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/new-consortium-faster-more-cost-effective-ethernet-possible-at-25-50-gbps/</guid>
         <pubDate>July 14, 2014</pubDate>
         <description>Have a cursory look around the consumer technology landscape and this much is evident: everything is moving to the cloud. The cloud is home to data in the form of photos, videos, software applications, books, music, e-commerce and financial transactions, and that is just the tip of the iceberg. This massive amount of data (7.7 zettabytes by the end of 2017, according to Cisco) is constantly being collected, computed, stored and called up from any device anywhere, by consumers who don't tolerate any downtime. While a boon to device-toting consumers, this great cloud migration means big challenges for data center operators such as Google, Microsoft, Amazon and Facebook. That's because large-scale operators like these depend on a strong network that meets the growing demand of more and more people using the Web to search, shop and share on a daily basis. Building out a network to manage that scale of traffic happens according to carefully planned technology roadmaps that account for thousands of different moving parts. Because network operators must build in bulk to meet continuous demand, they have to optimize for both performance and cost. As Big Data grows ever bigger, operators are looking beyond 100-gigabit-per-second (Gbps) Ethernet to get faster speeds at reduced costs. Broadcom recently announced that it was part of a new consortium of companies looking to put a new, fractional Ethernet speed on the map to meet this need. The 25 Gigabit Ethernet Consortium proposes a scalable approach that taps 25 Gbps and 50 Gbps Ethernet links to improve efficiency in cloud-scale data centers. This new group, which comprises Broadcom, Google, Microsoft, Arista Networks and Mellanox Technologies, has made available to the industry a specification that enables large networks to run over a single-lane 25 Gbps or dual-lane 50 Gbps Ethernet link protocol, with the end goal of scaling up</description>
      </item>
      <item>
         <title>Broadcom at Computex: Unleashing the Power of 5G WiFi</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-at-computex-unleashing-the-power-of-5g-wifi/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-at-computex-unleashing-the-power-of-5g-wifi/</guid>
         <pubDate>June 5, 2012</pubDate>
         <description>Today's networks are being taxed more than ever by the influx of bandwidth-hungry devices, from smartphones and tablet PCs in the workplace to DVRs and gaming consoles in the home. Wireless technology, in particular, is facing a growing set of requirements to satisfy users. Broadcom has been at the forefront of Wi-Fi's rapid evolution, empowering service providers with the technology they needed to manage the consumer demands for access to data-heavy content, such as video. Earlier this year, Broadcom solidified its leadership position in Wi-Fi technology by launching the first 802.11ac, or 5G WiFi, chips. Since then, partners such as NETGEAR, Buffalo Technology and Belkin have started integrating the technology into routers and gateways that are now making their way onto retail shelves. But Broadcom hasn't stopped there. Today, at Computex in Taipei, Taiwan, Broadcom introduced new, highly integrated SoCs designed to unlock the full potential of 5G WiFi networking for home gateways and SMB access points, network attached storage (NAS) boxes and other devices. These new products comprise two families: the StrataGX series for SMBs and NAS devices, and the BCM4708x series for home networking applications. These new Broadcom SoCs are the industry's first to combine a high-performance processor, Gigabit Ethernet (GbE) switch, GbE physical layer transceivers (PHYs), USB 3.0 and traffic accelerators, all on a single chip. Broadcom's 5G WiFi vision is to ensure that everyone sees faster and more reliable streaming of digital content, gets quicker syncing in the cloud, and enjoys the simultaneous connection of wireless devices to home and enterprise networks. The 5G WiFi ecosystem makes bandwidth available to the masses. What will you do with it? Related Posts: 5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle Next-Gen Devices and Demands | 5G WiFi Grows: Belkin Adds to Lineup of Next-Gen Wireless Products | First 5G WiFi Product</description>
      </item>
      <item>
         <title>Tiny Combo Chips Pack a Big Punch with 5G WiFi for Mobile Devices</title>
         <link>https://www.broadcom.com/blog/wireless-technology/tiny-combo-chips-pack-a-big-punch-with-5g-wifi-for-mobile-devices/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/tiny-combo-chips-pack-a-big-punch-with-5g-wifi-for-mobile-devices/</guid>
         <pubDate>July 24, 2012</pubDate>
         <description>[Image: Broadcom is taking 5G WiFi technology to the next level with the BCM4335 combo chip for the entire mobile device ecosystem.] The power of 5G WiFi takes connectivity to a new level, delivering amped-up access to HD video, streaming music and digital photos via smartphones, smart TVs and other consumer electronics. Since the unveiling of Broadcom's first 5G WiFi chip at the 2012 Consumer Electronics Show, a number of companies have launched products with the 802.11ac technology, including networking products such as routers, as well as client devices like notebooks and PCs. Today, Broadcom is taking 5G WiFi technology to the next level with the launch of the BCM4335, a new combo chip designed for mobile devices, such as smartphones, tablets and ultrabooks. The BCM4335 includes a complete 5G WiFi system, along with Bluetooth 4.0 and FM radio, on a single, integrated chip. With this combo chip, Broadcom is the first company to sample 5G WiFi solutions for every major wireless product segment. The BCM4335 complements the growing ecosystem of 5G WiFi access products and will bring the full benefits of the technology to the mobile experience. It enables a more seamless and satisfying experience when streaming video, sharing files or synchronizing media libraries to smartphones, tablets and mobile PC products. Broadcom combo chips are among the most widely adopted in the wireless industry. The key to Broadcom's success? The combo chip. Broadcom has engineered complete radio systems onto tiny pieces of silicon, providing device makers the flexibility to easily add the most advanced wireless capabilities to any platform. The BCM4335 is Broadcom's most advanced combo chip yet, packing an incredible amount of networking and connectivity into a chip that's small and energy efficient enough to fit into the sleekest designs. As the new chip appears in smartphones and mobile devices</description>
      </item>
      <item>
         <title>Location-Based Services: Don't Just Find the Store, Find the Best Deals</title>
         <link>https://www.broadcom.com/blog/wireless-technology/location-based-services-dont-just-find-the-store-find-the-best-deals/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/location-based-services-dont-just-find-the-store-find-the-best-deals/</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>The real estate mantra location, location, location could also be said of our 21st century connected lifestyle, in which nearly three-quarters of smartphone owners get real-time location-based information on their phones. If you've ever checked into a store on Foursquare or shared your locale on Facebook, you know how easy it is to connect your real life to your digital one. This connection will soon become much more pervasive, thanks to location-based services, one of the trends to watch at next week's International Consumer Electronics Show in Las Vegas. For years, Broadcom has been working behind the scenes to implement and improve the technologies that make these sorts of services possible. Broadcom has already integrated global positioning systems into smartphone combo chips and has synced phones with satellites for improved accuracy. Now, the company is working on how to use those technologies to help connect companies and consumers. Location-based technology uses a combination of GPS chips (the very same ones that provide turn-by-turn directions and map apps) embedded in phones and tablets, the location of other phones nearby and data from Wi-Fi hotspots to &quot;triangulate&quot; the user with pinpoint accuracy. The location is then detected by proximity sensors that interact with a mobile device. It's a series of passive communications that doesn't require action on the user's part and works just as well indoors as it does outside. Best of all, it can yield powerful connections in real time. For example, nearby friends may get an alert letting them know you are close enough to meet up. Or, you can navigate a museum without a guide, while getting useful info along the way. Another practical application: keeping track of family members as everyone explores different corners of the mall. The most important application proposed thus far is in retail, where shopping deals and personal services will be sent</description>
      </item>
      <item>
         <title>Report: Broadcom Ranked No. 1 in Wireless; Integration Seen as Key</title>
         <link>https://www.broadcom.com/blog/wireless-technology/report-broadcom-ranked-no-1-in-wireless-integration-seen-as-key/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/report-broadcom-ranked-no-1-in-wireless-integration-seen-as-key/</guid>
         <pubDate>March 11, 2013</pubDate>
         <description>Thanks in part to Broadcom, wireless technologies such as Bluetooth and Wi-Fi have become household names. The company was recently given a nod for its market leadership in each of those well-known wireless technologies by London's ABI Research. Broadcom ranked No. 1 in three Competitive Assessments conducted by the researcher, namely wireless connectivity, Bluetooth and Wi-Fi integrated circuits (ICs). From the ABI release: Broadcom is the market leader for wireless connectivity ICs, with by far the largest market share. It has had particular success with media tablets and successive wireless connectivity combo ICs, used predominantly in smartphones from handset vendors such as Apple, Samsung, LG, HTC, Nokia, and many more. Qualcomm came in second in each of the three categories, ABI's research showed. One of the big differentiators for Broadcom is its wide product portfolio, which enables the company to seamlessly combine today's most sought-after wireless capabilities into a single, low-power system-on-a-chip (SoC). As more devices have embraced two or more short-range wireless technologies, it has been those suppliers able to integrate Bluetooth, Wi-Fi, GPS, NFC, FM and so on, such as Broadcom, that have grabbed market share from competitors and become market leaders, said ABI practice director Peter Cooney. Broadcom is a leader in wireless connectivity for consumer electronics, but isn't content to rest on its laurels. The company had a big market breakthrough last year, when it unveiled a quad combo chip (which includes Near Field Communication, Bluetooth 4.0, Wi-Fi and FM radio). Broadcom is also driving new innovations that are set to change how consumers interact with their mobile devices, including location-based technology like geofencing, Wi-Fi-enabled appliances in the home, next-generation 5G WiFi and wireless media sharing between devices. 
Related: NFC Ready for Mainstream Adoption with New Combo Chip From Mobile World Congress: 5G WiFi Ecosystem Grows with Arrival of HTC</description>
      </item>
      <item>
         <title>Barcelona Bound: Broadcom Heads to Mobile World Congress 2014</title>
         <link>https://www.broadcom.com/blog/wireless-technology/barcelona-bound-broadcom-heads-to-mobile-world-congress-2014/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/barcelona-bound-broadcom-heads-to-mobile-world-congress-2014/</guid>
         <pubDate>February 21, 2014</pubDate>
         <description>The rising popularity of mobile devices, everything from smartphones and tablets to the new wearable fitness devices, has put the spotlight on Barcelona this week, where the annual Mobile World Congress tradeshow kicks off today. With some 72,000 attendees and 1,700 exhibitors from 79 countries expected, the show has become the biggest showcase for all things mobile.

Of course, it's not just the devices that drive the excitement around mobile technology. At the Broadcom booth this week, visitors will learn about all of the cutting-edge technologies that will power the next generation of these devices, enabling them to provide more features and services.

Broadcom's joining the conversation by talking up its new turnkey LTE platform, which will bring faster LTE data connections to affordable smartphones in emerging markets, and the first global location chip for wearables, a development that will substantially improve the accuracy of devices such as those that measure the distance traveled by a runner.

Wearables have dominated trend stories leading up to the show, as have articles about how companies such as Broadcom are creating an ecosystem that will provide them with power, connectivity and functionality.

Right in step, the organizers of MWC are holding a competition to see who can log the most activity on a Fitbit wearable fitness tracker.

As the show kicks off, Broadcom will be talking about all of these technologies and a few others, too. Stay tuned.

From the Broadcom Newsroom:

	Broadcom Delivers First Global Location Chip for Wearables
	Broadcom and NSN Demonstrate Category 6 LTE-Advanced 300 Mbps On Live Commercial Network from Elisa
	Broadcom Announces New Turnkey LTE Platform Targeting the Growing Sub $300 Smartphone Market


Not heading to Barcelona? Get the latest MWC news from Broadcom on our website, or on Facebook, Twitter and the blog.</description>
      </item>
      <item>
         <title>Lewis Brewster in RCR Wireless: A Closer Look at the Top Five Emerging Connectivity Technologies in 2015 and Beyond</title>
         <link>https://www.broadcom.com/blog/wireless-technology/lewis-brewster-in-rcr-wireless-a-closer-look-at-the-top-five-emerging-connectivity-technologies-in-2015-and-beyond/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/lewis-brewster-in-rcr-wireless-a-closer-look-at-the-top-five-emerging-connectivity-technologies-in-2015-and-beyond/</guid>
         <pubDate>July 24, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in RCR Wireless in which Lewis Brewster, Vice President and General Manager of Wireless Connectivity at Broadcom, talks about the top five wireless technologies that will influence life in 2015 and beyond. From RCR Wireless: With the Internet of Things reaching into every aspect of modern life, analysts predict the number of connected devices to reach around 5 billion by the end of this year, up 30% from 2014. By the end of the decade, that number is expected to reach nearly 40 billion, far exceeding the number of people on the planet. At the heart of the IoT are the connectivity technologies that make it possible. Let's take a closer look at the top five emerging connectivity technologies to watch in 2015 and beyond. Surf and stream with 802.11ac and 802.11ad Wi-Fi: At the top of the list is the combined surf-and-stream capability enabled by two emerging wireless technologies: 802.11ac in the 5 GHz band and 802.11ad in the 60 GHz band. With consumers streaming more and more content, being able to connect over a robust wireless connection is critical. 802.11ac Wi-Fi brings a number of new features to connectivity that weren't possible just two years ago. It allows media streaming from a handset to a digital television at data rates comparable to Ethernet, enables high-speed data and media synchronization, and significantly improves the Wi-Fi user experience by reducing signal fading and lost connections while increasing range. Add 2x2 multiple-input/multiple-output transmission capabilities to the mix and 802.11ac Wi-Fi performance becomes even more powerful. 2x2 MIMO can transmit or receive two data streams concurrently over two antennas, doubling throughput and enabling much faster download times. This</description>
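The MIMO throughput claim above (two spatial streams over two antennas doubling the rate) can be sanity-checked with simple arithmetic. A minimal sketch; the 433 Mbps per-stream figure is an illustrative assumption corresponding to a common 802.11ac 80 MHz configuration, and is not stated in the post:

```python
# Ideal aggregate PHY rate scales linearly with spatial streams.
PER_STREAM_MBPS = 433  # assumed nominal 802.11ac per-stream rate (80 MHz)

def aggregate_rate_mbps(streams: int, per_stream: float = PER_STREAM_MBPS) -> float:
    """Each independent spatial stream contributes its full rate."""
    return streams * per_stream

print(aggregate_rate_mbps(1))  # single-stream baseline: 433 Mbps
print(aggregate_rate_mbps(2))  # 2x2 MIMO: 866 Mbps, double the baseline
```

Real-world gains are lower than this ideal linear scaling because of channel conditions and protocol overhead, but the doubling of the nominal link rate is exactly what the post describes.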
      </item>
      <item>
         <title>Cisco readies new all-flash data centers with Broadcom® Emulex® 32G Fibre Channel networking</title>
         <link>https://www.broadcom.com/blog/cisco-readies-data-centers-for-the-new-wave-of-all-flash-arrays-</link>
         <guid>https://www.broadcom.com/blog/cisco-readies-data-centers-for-the-new-wave-of-all-flash-arrays-</guid>
         <pubDate>April 11, 2017</pubDate>
         <description>Solid-state storage technology is a key driver of datacenter performance improvements, changing the economics of datacenter investments and demanding more performance from storage networks. Enterprises that deploy any form of solid-state drive (SSD) technology have experienced significant application performance improvements. Many of these all-flash array (AFA) deployments are in SAN environments, which raises storage networking bandwidth demand. At the same time, new generation multi-core servers with PCIe 3.0 technology deliver 64 Gb/s performance on a standard PCIe Gen 3 x8 slot. In this new all-flash world, a faster network is an essential piece of a high-performance datacenter plan. When speaking with end users about their storage networks, we’re hearing from a growing number that they have saturated their legacy networks and need a faster solution. The applications that are consistently identified as needing higher bandwidth are database applications, VDI, and now even newer scale-out applications. Examples include single database instances running on physical hardware, multiple database instances running on physical hardware, and multiple database instances running in virtual machines (VMs). Users of these applications are generally looking for compatibility with their existing infrastructure and higher bandwidth to meet their growing storage demands. Cisco is solving the performance problem with its new MDS 9700 48-port 32Gb Fibre Channel Switching Module that enables customers to easily upgrade their existing MDS chassis to 32Gb without a forklift upgrade. The MDS 9700 32G Fibre Channel module enables scalability to meet the demands of growing workloads and Virtual Machines with the ability to scale up to 1536 Gbps for high-speed connectivity to virtualized servers, all-flash arrays and upcoming NVMe arrays. Cisco addressed operational challenges by introducing switch native, line rate analytics for FC SAN. 
Cisco also provides high bandwidth storage connectivity with 32G Fibre Channel HBAs for Cisco UCS C-Series. The Emulex® 32G Fibre Channel</description>
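The 64 Gb/s figure quoted above for a PCIe Gen 3 x8 slot is easy to reconstruct: PCIe 3.0 signals at 8 GT/s per lane, with 128b/130b line encoding trimming the effective payload rate slightly. A small sketch using those standard PCIe 3.0 constants (they are not stated in the post):

```python
# PCIe 3.0 per-direction bandwidth: 8 GT/s per lane, 128b/130b encoding.
GT_PER_LANE = 8.0      # gigatransfers per second per lane
ENCODING = 128 / 130   # fraction of raw bits that carry payload

def pcie3_bandwidth_gbps(lanes: int, effective: bool = False) -> float:
    """Raw (or encoding-adjusted) one-direction bandwidth in Gb/s."""
    raw = GT_PER_LANE * lanes
    return raw * ENCODING if effective else raw

print(pcie3_bandwidth_gbps(8))                  # raw: 64 Gb/s, as cited
print(round(pcie3_bandwidth_gbps(8, True), 1))  # ~63.0 Gb/s after encoding
```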
      </item>
      <item>
         <title>Mobility enhancements for next-generation Wi-Fi</title>
         <link>https://www.broadcom.com/blog/mobility-enhancements-for-next-generation-wi-fi</link>
         <guid>https://www.broadcom.com/blog/mobility-enhancements-for-next-generation-wi-fi</guid>
         <pubDate>June 5, 2017</pubDate>
         <description>Wi-Fi continues to be one of the fastest growing segments of the wireless market: In 2015, Wi-Fi traffic was 55.2 percent of Internet traffic, and it is expected to be 59.1 percent of total Internet traffic in 2020. A wave of major enhancements to Wi-Fi technology is now arriving that will ensure that increasing demand and user expectations are met. In this article, we highlight some of these new features that relate to enhanced mobility. Modern Wi-Fi networks typically operate on multiple frequency bands in order to maximize capacity, and often comprise multiple Wi-Fi Access Points (APs) in order to expand coverage. A Wi-Fi device automatically switches between these different bands and APs within the network in order to maintain the best connection quality as the user moves, and to help balance the traffic load evenly across the network. In addition, users’ devices often connect to multiple different networks as they go about their day – between home and office Wi-Fi, or from the Wi-Fi hotspot at their favorite café, to a large-scale Wi-Fi network covering a broad municipal area. Multimode devices such as smartphones also switch between Wi-Fi and cellular networks, depending on coverage. In all these scenarios, users expect that the services they are using – whether they be the latest mobile apps, video calls, gaming or emerging augmented reality services – continue to work fast, fluidly and without interruption, as their connection point to a network, or the network itself, changes. Bringing Enhanced Mobility to Wi-Fi: To ensure the increasing bandwidth demand and user expectations are met, a set of enhancements to Wi-Fi technology has been developed that makes it easier to deploy and manage a Wi-Fi network with great mobility performance. These enhancements cover three key areas: enhanced Wi-Fi mobility (fast discovery, roaming and authentication), Neutral Host</description>
      </item>
      <item>
         <title>Broadcom reaches shipment milestone of one million G.fast lines</title>
         <link>https://www.broadcom.com/blog/broadcom-reaches-shipment-milestone-of-one-million-g-fast-lines</link>
         <guid>https://www.broadcom.com/blog/broadcom-reaches-shipment-milestone-of-one-million-g-fast-lines</guid>
         <pubDate>June 13, 2017</pubDate>
         <description>Service providers are expanding DSL and fiber connectivity with Broadcom solutions. Today, Broadcom announces that its Central Office (CO) chipsets BCM65200 and BCM65400, as well as its customer premises equipment (CPE) chipset BCM63138, have shipped in more than one million G.fast lines to date. Echoing the success of the industry's standardization and ongoing enhancements to G.fast technology, these chipsets are utilized in G.fast systems now being certified via the Broadband Forum's (BBF) G.fast test program at the University of New Hampshire InterOperability Laboratory (UNH-IOL). Broadcom is a key contributor to the BBF IR-337 certification test specification, as well as an active participant in supporting the interoperability events that led to the certifications announced today by the BBF.

Broadcom’s G.fast solutions offer the full range of capabilities critical to the ongoing success of G.fast, including high-density vectoring, support for the new 212MHz profile, media support for both coaxial and twisted-pair environments, and the unique ability to fall back to legacy xDSL standards.

“We are very encouraged by the rapid progress of G.fast from standardization to volume deployments,” said Greg Fischer, senior vice president and general manager of the Broadband Carrier Access division. “Our strong commitment to support both volume production and the interoperability efforts today announced by the BBF should reinforce operators’ high confidence in making G.fast a cornerstone of their Gigabit broadband roadmaps.”

Broadcom G.fast solutions are ideal for a number of applications where DSL is required to extend service providers’ fiber network to the consumer. DSL system vendor partners and operators view Broadcom’s G.fast chip sets as powerful tools where a comprehensive, highly functional, and power-efficient solution is needed. It’s these attributes that make G.fast solutions from Broadcom so successful and why this significant milestone was achieved today.
</description>
      </item>
      <item>
         <title>Report: Broadcom's Home Networking Technologies Rank First for Innovation</title>
         <link>https://www.broadcom.com/blog/report-broadcoms-home-networking-technologies-rank-first-for-in</link>
         <guid>https://www.broadcom.com/blog/report-broadcoms-home-networking-technologies-rank-first-for-in</guid>
         <pubDate>January 24, 2013</pubDate>
         <description>When it comes to home networking technologies, Broadcom is an industry leader that is well-positioned to lead the hybrid wired-wireless networking space, according to an analyst report issued today by ABI Research. [Image: An illustration showing the MoCA Alliance's vision of the connected home.] In the report, Broadcom was named first among the top five chipmakers for innovation and implementation of home networking technologies. That conclusion was based on a competitive analysis by ABI that looked at the top home networking standards, such as HomePlug, MoCA, Wi-Fi and others. Read ABI's full press release here. The companies were ranked on a matrix that measured innovation and implementation of technologies in 2011 and 2012, as well as market share data by total nodes shipped in 2011, the most recent full year for which data was available. Broadcom emerged as a leader in the vendor matrix analysis with a score of 67.7, the report showed. Broadcom finished first overall and in both the innovation and implementation categories. Broadcom has the widest presence in the wired (MoCA and HomePlug) and wireless networking markets and is well positioned to lead the hybrid networking space. While Broadcom ranked second in market share for 2011 node shipments, ABI is bullish on the company's ability to gain traction in the future. From the report: Broadcom is on a good trajectory to capture the top position, particularly as the market matures. Broadcom's top position in the Wi-Fi market (discrete and combo solutions) also affords the company an advantage as hybrid networking increasingly becomes more commonplace. 
Related: DLNA's CES Mission: Premium Content on Any Device in Your Home IT World: Six Home Networking Technologies to Watch Out For Broadcom's Michael Hurlston on CES Panel: Six Wireless Technologies You'll Want to Know Connected Home Technologies: See the Enhanced In-Home Experience at CES IEEE Consumer</description>
      </item>
      <item>
         <title>Multigigabit Broadband No Longer a Pipe Dream with DOCSIS 3.1</title>
         <link>https://www.broadcom.com/blog/home-networking/multigigabit-broadband-no-longer-a-pipe-dream-with-docsis-3-1/%09</link>
         <guid>https://www.broadcom.com/blog/home-networking/multigigabit-broadband-no-longer-a-pipe-dream-with-docsis-3-1/%09</guid>
         <pubDate>August 10, 2015</pubDate>
         <description>Cable operators are gearing up for a massive broadband boost, one that will give their bandwidth-hungry customers what they've been asking for: high-speed downloads, seamless over-the-top streaming content on multiple devices, real-time interactive gaming and whole-home services such as remote monitoring and automation. Although fiber deployments (think Verizon Fios, Google Fiber) have been offering consumers Gigabit speeds in the past year or so, such services haven't been widely available in most metro areas. With DOCSIS 3.1, a number of service providers are expected to begin the rollout of Gigabit broadband services in more than 150 U.S. regions. The new standard promises broadband speeds of up to 10 gigabits per second (Gbps) on downstream links and up to 1 Gbps upstream, offering a huge leap over the average 21.2 megabits per second (Mbps) data transmission speeds available via cable modems today, according to a 2014 Measuring Broadband America report by the Federal Communications Commission. The upgrade to multigigabit cable broadband means more robust cable performance and a clear path to Ultra HD content, while future-proofing the cable networks that will deliver all of that pixel-dense content to consumers' homes. DOCSIS 3.1 brings nearly a 100x increase in the average data rate to the home, said Richard Nelson, senior vice president of marketing, Broadband &amp; Connectivity Group at Broadcom. This will eventually give consumers of streaming content, such as Netflix, the kind of bandwidth needed to stream Ultra HD content to multiple screens and download an entire 14 GB digital movie in less than two minutes. Broadcom is on the leading edge of DOCSIS 3.1 adoption, with DOCSIS 3.1-enabled cable chips already sampling with customers. The company made a splash in January at the Consumer Electronics Show, when it was the first to announce a system-on-a-chip (SoC) for a DOCSIS 3.1 cable modem. The BCM3390 SoC</description>
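The download-time claim above checks out with simple arithmetic; a quick sketch, assuming a 1 Gbps service tier and ignoring protocol overhead:

```python
def download_seconds(size_gigabytes: float, link_gbps: float) -> float:
    """Transfer time in seconds: gigabytes -> gigabits, divided by link rate."""
    return size_gigabytes * 8 / link_gbps

# A 14 GB movie over a 1 Gbps DOCSIS 3.1 Gigabit service:
print(download_seconds(14, 1.0))            # 112 seconds, under two minutes
# The same movie at the cited 21.2 Mbps average cable speed:
print(round(download_seconds(14, 0.0212)))  # 5283 seconds, roughly 88 minutes
```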
      </item>
      <item>
         <title>Broadcom's 5G WiFi: 5 Ways it Improves Your Internet Experience</title>
         <link>https://www.broadcom.com/blog/broadcoms-5g-wifi-5-ways-it-improves-your-internet-experience</link>
         <guid>https://www.broadcom.com/blog/broadcoms-5g-wifi-5-ways-it-improves-your-internet-experience</guid>
         <pubDate>January 8, 2015</pubDate>
         <description>LAS VEGAS: Before I was hired to be part of Broadcom's Blog Squad at the International Consumer Electronics Show this week, I hadn't really given much thought to how much or how often an embedded semiconductor company touches our lives. I quickly discovered that Broadcom's Wi-Fi and Bluetooth chips and other connection wizardry reside in the mechanical closets of most of our gadgets. That includes phones, computers, set-top boxes, game consoles, dongles -- really any electronic device that wants to send and receive information, as well as the back-end networking equipment that transmits data to and from data centers. I covered many topics for Broadcom this week, from today's smart homes to the future of wearables, plus a show overview. But it was Broadcom's new multi-user Wi-Fi technology that interested me the most. Here's why: Bandwidth is all I really crave now. I don't care about the specs in my iPhone, Chromebook, Roku or any other gadget I own. I just want their apps to work. For that to happen, I need a robust connection at home and work that's smart enough to handle the growing number of concurrent users and devices all working together to clog the intertubes. Earlier this week, Broadcom unveiled a suite of 5G WiFi-enabled router products designed to bring 802.11ac performance to the modern home Wi-Fi router or workhorse enterprise access point, so that speedier, bandwidth-busting hubs can better serve every connected device. Here are five reasons everyone should care about smarter, faster, wider-ranging and multi-user Wi-Fi: 1. It overcomes grainy video streams. I'm fortunate to have Google Fiber in my home. But the included 802.11n router is ill-suited for the task when several video streams or devices jump on the network. This is because old routers weren't meant to handle the number of devices</description>
      </item>
      <item>
         <title>Broadcom Completes NetLogic Acquisition</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-completes-netlogic-acquisition/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-completes-netlogic-acquisition/</guid>
         <pubDate>February 17, 2012</pubDate>
         <description>Earlier today Broadcom wrapped up its $3.7 billion acquisition of NetLogic Microsystems, a leader in high performance intelligent semiconductor solutions for next generation networks.

The acquisition, first announced in September 2011, is an important one for both companies and their customers. The combination of Broadcom and NetLogic provides a true end-to-end network infrastructure platform solution, extending Broadcom's leadership in integrating bandwidth, connectivity and cutting-edge processing technologies required to drive the next generation build-out.

NetLogic has a world-class engineering team and a strong IP portfolio. Ron Jankov, former president and CEO of NetLogic, is joining Broadcom as senior vice president and general manager in Broadcom's Infrastructure &amp; Networking Group, led by Rajiv Ramaswami. NetLogic's approximately 700 employees also are joining Broadcom.

Scott McGregor, Broadcom president and CEO, described the acquisition as a significant milestone in Broadcom's strategy to &quot;extend its communications infrastructure leadership and take advantage of the explosive growth in mobile and video traffic and the rise of cloud computing.&quot;

Read the acquisition close news release.</description>
      </item>
      <item>
         <title>A Strategic Partnership: Broadcom &amp; NetLogic (Part 3 of 3)</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-3-of-3/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-3-of-3/</guid>
         <pubDate>August 10, 2012</pubDate>
         <description>Broadcom acquired NetLogic Microsystems Inc. in February 2012. The Santa Clara-based company was incorporated into Broadcom's Infrastructure and Networking Group (ING) to provide a more complete solution for mobile infrastructure, including switches, microwave backhaul and more.

In this video, I talk with Broadcom's Rajiv Ramaswami (Executive Vice President and General Manager, ING) and Ron Jankov (formerly NetLogic's CEO, now Senior VP &amp; GM of Processors and Wireless Infrastructure in the ING business unit) about the perks of the new partnership, and what makes the acquisition of NetLogic, the largest in Broadcom's history, so technologically important.

What's in store for the future of Broadcom's Infrastructure and Networking Group, now that NetLogic has come aboard? The future is one where Broadcom and NetLogic technology sit side by side, on the same piece of silicon. The goal of the partnership is to enable Broadcom to make a fully integrated system on a chip (SoC) that combines what Broadcom and NetLogic both do best. The combination allows for the design of chips that are able to run faster, use less power, take up less space and process an exponentially larger amount of data.


Watch the rest of the series:
A Strategic Partnership: Broadcom &amp; NetLogic (Part 1 of 3)
A Strategic Partnership: Broadcom &amp; NetLogic (Part 2 of 3)

</description>
      </item>
      <item>
         <title>World's Fastest Converged Network Adapters, Again!</title>
         <link>https://www.broadcom.com/blog/worlds-fastest-converged-network-adapters-again</link>
         <guid>https://www.broadcom.com/blog/worlds-fastest-converged-network-adapters-again</guid>
         <pubDate>May 1, 2012</pubDate>
         <description>Guest blog by Dennis Martin, President of Demartek. A number of new infrastructure technologies are influencing the way next-generation datacenters are operating and performing. Consider how storage networks are being impacted by a number of technologies, such as server virtualization, cloud-based computing, media-rich applications and even higher-performance storage offerings, such as solid-state drives. Now, more than ever, performance matters. I just spent a few days in the labs at Broadcom running independent tests of the company's latest-generation 10 Gb/s dual-port converged network adapters (CNAs). I've got a full report that I'll be releasing next week, the same week as the Interop conference. But, ahead of that, here are some initial impressions of my time in the lab. Once again, Broadcom has cranked up the performance, exceeding the already impressive results from the previous generation of adapters. With its BCM957810 adapter, Broadcom has combined many functions onto one ASIC, increased the performance of the onboard processors and improved power efficiency compared to the previous product. Tests of that earlier product, the BCM957712, outperformed the competition, achieving 1.7 million I/Os per second. This next-generation adapter clocked 2.5 million I/Os per second for random reads via Fibre Channel over Ethernet (FCoE), a 47 percent increase that's now 200 percent faster than the competition. Over iSCSI, the reads came in at an impressive 1.5 million I/Os per second, or 100% faster than the competition. It's also worth noting that the next-generation BCM957810 is a full-offload CNA, meaning that it fully offloads both FCoE and iSCSI processing onto the adapter, reducing overhead on the host CPU. It also fully offloads TCP/IP processing with its TCP/IP Offload Engine (TOE), further reducing the load on the host CPU. 
This is just a sampling of the information that will be in my full report, which will be made available in</description>
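The generation-over-generation percentage claims above are easy to verify with the IOPS figures given; a quick sketch:

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# FCoE random reads: 1.7M IOPS (BCM957712) -> 2.5M IOPS (BCM957810)
print(round(pct_increase(1.7, 2.5)))  # 47 percent, matching the post
```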
      </item>
      <item>
         <title>Getting Smart About Knowledge-Based Processors: A Primer on KBPs</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/getting-smart-about-knowledge-based-processors-a-primer-in-kbps/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/getting-smart-about-knowledge-based-processors-a-primer-in-kbps/</guid>
         <pubDate>December 3, 2012</pubDate>
         <description>We live in a world today where people are connected around the clock. So it should come as no surprise that the number of connected devices, as well as the content, apps and services we consume on them, is exploding every year. The global smartphone market alone is expected to reach 1.1 billion devices by the end of 2012, a 43 percent increase from 2011, according to research firm Canalys. By 2016, that number is expected to jump to 2.5 billion. All of these connections have spurred the growth of data centers capable of handling the increased demands for bandwidth, but virtual traffic jams persist. That's where knowledge-based processors step in to help alleviate the slowdowns. At the most basic level, knowledge-based processors, or KBPs, are network processors that power search engines and accelerate speed and performance. Some of the newest KBPs, however, go beyond core functionality to home in on improving one crucial aspect of Internet traffic: network searches. According to Rajagopal Krishnaswamy, Associate Product Line Director for Broadcom's Processors and Wireless Infrastructure group, KBPs allow data centers to handle more complex search requests by integrating associated data for search functionality and driving lower system latency. This new generation of KBPs returns the results in the same guaranteed amount of time, regardless of what you're searching for or how complex it is, he said. Today, Broadcom introduced the world's first 28 nanometer (nm) heterogeneous KBPs, a solution that upgrades performance in routers, switches, service gateways, security appliances and mobile infrastructure equipment by integrating knowledge-based processing hardware with NetRoute search technology for faster performance at lower power. In fact, Broadcom's new KBPs serve up searches up to 24 times faster than current processors, while using less power. 
The 28nm KBP launch is the third product launch in recent months that taps into the technologies that Broadcom</description>
      </item>
      <item>
         <title>Why Ethernet Always Wins: Celebrating 40 Years</title>
         <link>https://www.broadcom.com/blog/why-ethernet-always-wins-celebrating-40-years</link>
         <guid>https://www.broadcom.com/blog/why-ethernet-always-wins-celebrating-40-years</guid>
         <pubDate>May 13, 2013</pubDate>
         <description>In 1973, the United States launched Skylab, its first space station, and Pink Floyd sang about the &quot;Dark Side of the Moon.&quot; The same year, a technology was born that would revolutionize computing for the next several generations. This month, we celebrate the 40th anniversary of Ethernet. Ethernet, the family of networking standards that enables computers to locally connect to each other, is still the ultra-strong backbone of the many networks we use every day. Its use has extended beyond the enterprise of the 1990s to encompass service provider and home networks, and now it's become the fabric of next-generation data centers. And its use is ever expanding, bringing connectivity capabilities beyond the business, service provider and data center networks, even to the open road. At Broadcom, we're constantly looking ahead at technologies that will change the way people work and play. But every once in a while, it's nice to look back, to reflect and to remember the strides that we've taken and the impact they've had. Next week, Broadcom Co-Founder, Chief Technical Officer and Chairman Henry Samueli will speak at an event being held at the Computer History Museum in Silicon Valley, a region where Ethernet is being celebrated all month. Back in the Day: Tech-biz old-timers like to reminisce about how Ethernet got its start at the famed Xerox Palo Alto Research Center (PARC) in 1973. Back then, it relied on 10BASE5, a fat coaxial cable with extra shielding that looked something like a stiff garden hose, and a slow bus-topology network that moved data at a rate slower than 10 megabits per second. [Image: Where Ethernet was born: a scene from Xerox's Palo Alto Research Center in 1975. Photo courtesy of PARC Inc., via the Computer History Museum.] That seems tortoise-like compared to Broadcom's offerings of 100 gigabit</description>
      </item>
      <item>
         <title>Going Deep: Network Processors Tackle Security, Speed</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/going-deep-network-processors-tackle-security-speed/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/going-deep-network-processors-tackle-security-speed/</guid>
         <pubDate>June 12, 2013</pubDate>
         <description>When consumers think about the network, whether a corporate network, a home network or even the mobile network, they tend to measure its performance based on things like connectivity and reliability rather than speed and security. But carrier-grade companies think differently. They worry about the vulnerability of the network and try their best to thwart hackers from compromising sensitive information stored on those networks, things like customer usernames and passwords, corporate intellectual property and critical applications. Certainly, a company's reputation can take a beating when it has to tell its customers that their data has been compromised. But the bottom line suffers, as well. An average data breach in 2011 cost the targeted company $5.5 million, according to Symantec and the Ponemon Institute. Network administrators need to maintain a balancing act: There's a definite trade-off between security and speed. The more time you spend searching for malware or malicious attacks, the more you slow the whole thing down. It doesn't have to be that way, thanks to hardware that's not only fast enough to maintain the network's reliability but also defends against attacks before they occur, all while maintaining wire speeds. Today, Broadcom announced the XLP900 series of processors, a family of high-performance multi-core communications processors optimized for the unique networking performance and security needs of service providers and enterprise data centers. On the performance side, it keeps up with heavier traffic demands on the network with a greater than 6x improvement in computational capacity. The XLP900 delivers 80 nxCPUs on a single chip. With the capability of eight connected chips, that translates into 640 nxCPUs per system. At a speed of 2 GHz, the XLP900 can compute 1.28 trillion operations per second. 
Read more in the press release. But beyond sheer computational heavy lifting, the processor family also offers unprecedented levels of security for</description>
      </item>
      <item>
         <title>Meet MGBASE-T: New 2.5/5 Gbps Ethernet Standard Eases Bottlenecked Enterprise Wireless Networks</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/meet-mgbase-t-new-2-55-gbps-ethernet-standard-eases-bottlenecked-enterprise-wireless-networks/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/meet-mgbase-t-new-2-55-gbps-ethernet-standard-eases-bottlenecked-enterprise-wireless-networks/</guid>
         <pubDate>December 1, 2014</pubDate>
         <description>Today, there's a broader understanding of what Wi-Fi is and what it can do for consumers. But as IT administrators are quickly discovering, not all Wi-Fi is created equal, especially in the enterprise. Office workers are already starting to see the benefits of 802.11ac Wi-Fi connectivity on leading-edge smartphones and home network routers. These devices, which industry watchers call Wave 1, already line store shelves. Next year, IT administrators anticipate 802.11ac Wi-Fi will expand into a second tier (Wave 2) of products and into the enterprise, where knowledge workers are expected to reap the benefits of greater wireless range and performance in their cubicles and on the go. Add the BYOD (bring your own device) trend into the mix, and the uptick in demand (think: smartphones, tablets, e-readers and laptops) will soon outpace the capacity of what's called the wireless access layer in the enterprise. It's evident that the backbone of these wireless access networks, 1 gigabit per second Ethernet, can't keep up. Most companies pushing up against the limits of 1 Gbps Ethernet are ready to make an upgrade, but their choices are limited. They can either jump up to 10 Gbps Ethernet, the next available IEEE standard, or add a second, 1 Gbps connection to double the bandwidth. Upgrading to 10 Gbps causes considerable grief for IT administrators and their bottom lines because it requires a major investment, with a complete rewiring and lots of added costs: more energy requirements, more cabling, more ports and, perhaps, heftier switching capabilities. Adding a second, 1 Gbps connection is also costly and requires significant infrastructure changes. New Ethernet Spec Enter the MGBASE-T Alliance, a newly formed industry group announced today that intends to help bridge the bandwidth gap for the enterprise wireless access layer. The Alliance is uniting around one idea: to bring Multi-rate Gigabit</description>
      </item>
      <item>
         <title>Near Field Communication is a (Video) Game Changer for Wii U</title>
         <link>https://www.broadcom.com/blog/wireless-technology/near-field-communications-tech-a-game-changer-for-wii-u/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/near-field-communications-tech-a-game-changer-for-wii-u/</guid>
         <pubDate>November 19, 2012</pubDate>
         <description>[caption id=&quot;attachment_5124&quot; align=&quot;alignright&quot; width=&quot;300&quot;] The Nintendo Wii U console hit store shelves Nov. 18.[/caption] The Holy Grail of video games, incorporating real-life elements into virtual games, is on the horizon. With the integration of Near Field Communication technology into Nintendo's Wii U, which hits store shelves this month, the possibilities for new, interactive user experiences are closer than ever. Near Field Communication, or NFC, isn't new to the tech scene; companies have been playing up the potential for NFC to revolutionize mobile payments for some time now. But it's the use of the technology in other ways, such as part of the video game experience, that has the potential to incite fresh buzz around the low-power, close-range wireless radio technology. NFC promises a whole new user experience, all with a simple tap. Read the announcement here. The Wii U incorporates a slew of great technology, including the GamePad, Nintendo's new controller with a 6.2-inch touchscreen. Broadcom is playing a pivotal role in the Wii U, not only by enabling its wireless connectivity but also by helping the Wii U become the first system to incorporate NFC. Coupled with dramatically enhanced dual-band Wi-Fi technology and high-performance Bluetooth connectivity, Nintendo's Wii U is creating an immersive gaming experience that's unique to each user. Nintendo has yet to reveal its specific plans for NFC, but the possibilities are virtually endless. Broadcom's long-standing partnership with Nintendo will transform how games are played and how players interact, and the incorporation of NFC has the potential to redefine the electronics landscape in gamers' living rooms. Game on! 
Related: IT World: Nintendo Packed Wireless Gear into Wii U for Streaming Video, Future Fun NFC World: Wii U packs Broadcom NFC NFC Times: Nintendo Releases NFC-Enabled Wii U; Broadcom Supplies NFC Chip iTers News: Broadcom Scores Big Design Wins for Nintendo Wii U Game Console</description>
      </item>
      <item>
         <title>Broadcom Tackles Battery Challenges in Smartwatches</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-tackles-battery-challenges-in-smartwatches/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-tackles-battery-challenges-in-smartwatches/</guid>
         <pubDate>February 23, 2015</pubDate>
         <description>When it comes to developing a new segment of portable electronic devices, from the earliest mobile phones to this new wave of connected wearable devices, there's always been one major challenge: battery life. From an engineering perspective, it's among the toughest nuts to crack for device makers that are looking to design, test and sell wearable gadgets such as smartwatches, fitness tracking bands, eyeglasses or even sensor-laden clothing. &quot;When it comes to wearables, power is king,&quot; said Larry Olivas, senior director of business development, Wireless Connectivity Combos, at Broadcom. That's why the company is taking a focused approach to the battery issue, introducing a second-generation platform for smartwatches that significantly lowers the power, size and development costs of such devices while adding support for new features. Today the company introduced a smartwatch reference design platform for wristband-style wearables and higher-end smartwatches. The reference design adds out-of-the-box connectivity, including Wi-Fi, Bluetooth, Near Field Communication (NFC), GPS and a sensor hub with six MEMS sensors that can measure humidity, temperature, movement and pressure, among other data points. For higher-end smartwatch designs, it includes an applications processor and modem to establish a connection over the cellular network. It's also optimized for the development of Android Wear-based wearables, which today supports an applications processor, Bluetooth, GPS and the sensor hub. Extending battery life is a critical part of improving the smartwatch experience, according to Broadcom's Olivas. 
&quot;Many of these next-generation devices will be powered by small lithium batteries, so every milliamp is crucial,&quot; Olivas said, noting that battery life can make or break a new entrant in the wide-open wearables market, which is expected to grow at a compound annual rate of 35 percent over the next five years, reaching 148 million units shipped in 2019, according to Business Insider Intelligence. The</description>
      </item>
      <item>
         <title>Broadcom and Android Wear Take Smart Watches to the Next Level</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-and-android-wear-take-smart-watches-to-the-next-level/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-and-android-wear-take-smart-watches-to-the-next-level/</guid>
         <pubDate>May 15, 2015</pubDate>
         <description>Google is rolling out an update to its Android Wear software for smartwatches that will not only unleash it from some of the limits of Bluetooth but will also boost support for gesture controls. That's welcome news for Broadcom, which has been ready to support the functionality of the new software update, unofficially called Diamond, since its February release of a low-power, Android Wear-optimized reference platform for smartwatches. &quot;The latest Android Wear update will enable Broadcom's technologies to shine for device makers working with our platform,&quot; said Larry Olivas, Broadcom's senior director of business development, Wireless Connectivity Combos. &quot;It's a great upgrade for consumers who have already bought an Android Wear-based device, because it can be updated over the air to get these features.&quot; Broadcom's reference design platform, for makers of wristband-style wearables and higher-end smartwatches, is optimized for Android Wear-based devices and sports an applications processor, Bluetooth, Wi-Fi, Near Field Communication (NFC), GPS and a sensor hub that includes a suite of MEMS sensors for tracking motion, pressure, direction, speed and more. Welcome, Wi-Fi With the new software release, Android Wear devices get a big upgrade: Wi-Fi. Aside from its ubiquity, Wi-Fi enables smartwatch wearers to untether a bit from their smartphones. Until now, things like Android Wear's notifications and voice recognition have only worked when the watch and phone were close enough to each other to connect via Bluetooth. &quot;With this latest update, your phone can actually be miles away. As long as your phone has data and your watch is connected to Wi-Fi, your notifications and replies will pass between them,&quot; blogger Greg Kumparak wrote in a recent TechCrunch post. 
But it's more than just the wireless connectivity that makes this attractive. Broadcom's Bluetooth and Wi-Fi combo chips enable smartwatch makers to reduce the size of their interior circuit boards by</description>
      </item>
      <item>
         <title>Dell EMC XtremIO all-flash arrays deliver 4x performance with Broadcom® Emulex® network </title>
         <link>https://www.broadcom.com/blog/dell-emc-xtremio-all-flash-arrays-delivers-4x-performance</link>
         <guid>https://www.broadcom.com/blog/dell-emc-xtremio-all-flash-arrays-delivers-4x-performance</guid>
         <pubDate>March 24, 2017</pubDate>
         <description>Datacenters that deploy all-flash arrays want the best storage performance possible, and they are making a significant investment in order to get it. To do an all-flash array implementation justice, the network needs to be fast enough to keep up with its blistering performance; otherwise, the only thing accomplished is moving the bottleneck from the storage array into the network. Dell EMC XtremIO arrays provide a unique scale-out, all-flash architecture, with data reduction and copy services that deliver the IOPS, bandwidth and capacity required to consolidate and support databases and analytics workloads across the data center. These powerful all-flash arrays were a natural choice for testing the impact of upgrading from an 8Gb Fibre Channel (8GFC) network to a 16GFC or 32GFC network. To assess the network’s effect on application performance, a series of tests was conducted using XtremIO all-flash arrays and PowerEdge R730 servers, connected to three different generations of Fibre Channel HBAs from Emulex and Fibre Channel switches from Brocade. The following applications and workloads were evaluated: • Database Applications – Microsoft SQL Server, Oracle Database • VM Boot Storms – Citrix, Microsoft Hyper-V and VMware • VM Storage Migration – Citrix, Hyper-V and VMware Database Applications – up to 72 percent faster: Microsoft SQL Server Decision Support Systems query times were up to 72 percent faster with the Broadcom® Emulex® 32GFC LPe32002 HBA and Brocade G620 switch, and 47 percent faster with 16GFC networks. To put that into context, during the test, the time it took to execute 22 queries against the database was reduced from 33 minutes to approximately 8 minutes for the Microsoft SQL Server workload. The Oracle Database query time was reduced by 68 percent compared to 8GFC and 41 percent compared to 16GFC. Virtual Machine Storage Migration – up to 75</description>
      </item>
      <item>
         <title>Innovation Comes Full Circle as Engineers Bring Ideas, Enthusiasm to Broadcom Technical Confab</title>
         <link>https://www.broadcom.com/blog/innovation-comes-full-circle-as-engineers-bring-ideas-enthusias</link>
         <guid>https://www.broadcom.com/blog/innovation-comes-full-circle-as-engineers-bring-ideas-enthusias</guid>
         <pubDate>June 5, 2013</pubDate>
         <description>Academics and TED Talk-goers alike love a good brainstorm. The thought is this: Get a lot of smart people together to hash out new ideas, and surely, all that cerebral electricity is bound to spark an idea that no one's had the gumption to try, at least not yet. [caption id=&quot;attachment_9163&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Broadcom's Henry Samueli looks over the research project of an electrical engineering graduate student at last year's competition.[/caption] It's in this same spirit that Broadcom hosts a Technical Conference near its Irvine, Calif., headquarters. The company gathers its top engineers from around the world for a two-day series of lectures that culminates in awards recognizing the best and brightest contributors, inventors and innovators among its ranks. More than just a corporate retreat, the Technical Conference is in some ways a nod to the academic roots of Broadcom Co-founder, Chairman and Chief Technical Officer Henry Samueli. A Ph.D. and former University of California, Los Angeles, electrical engineering professor, Samueli, who started Broadcom more than 20 years ago, values higher education and the edge new thinking brings to Broadcom's engineers. With blue-sky discussion topics such as &quot;The Future of Wi-Fi&quot; and &quot;Hardware without Hardware,&quot; and more earth-bound ones such as &quot;High Performance Switch Silicon for Evolving Data Center Fabrics,&quot; the annual event is sure to inspire Broadcom's engineers anew. The Technical Conference will feature a core program that supports the continuing education of future engineers: the Broadcom Foundation University Research Competition, which aims to showcase engineering research that will impact society at large. 
The Foundation hosts a dozen graduate-level engineering students from prestigious universities across the globe, some hailing from as far away as Tel Aviv University in Israel and others from closer to home, to present their research to experts at Broadcom and compete for cash prizes. The competitors are picked from the universities that</description>
      </item>
      <item>
         <title>Rich Nelson in Multi-Channel News: &quot;Gigabit Broadband Will Radically Alter How Consumers Apply and Interact with Internet Technology&quot;</title>
         <link>https://www.broadcom.com/blog/rich-nelson-in-multi-channel-news---gigabit-broadband-will-radically-alter-how-consumers-apply-and-interact-with-internet-technology-</link>
         <guid>https://www.broadcom.com/blog/rich-nelson-in-multi-channel-news---gigabit-broadband-will-radically-alter-how-consumers-apply-and-interact-with-internet-technology-</guid>
         <pubDate>September 9, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Multi-Channel News, in which Rich Nelson, Senior Vice President of Marketing, Broadband and Connectivity Group at Broadcom, talks about how cable companies are bringing Gigabit broadband to customers. From Multi-Channel News: Gigabit Internet speeds are right around the corner, about to release technological potential that has been constrained for far too long. Fueled by consumer demand for ever faster and more reliable broadband Internet, cable operators are getting ready for mass deployment of Gigabit speeds to cable subscribers throughout the country. Capable of delivering a dramatic increase in speed, Gigabit broadband will radically alter how consumers currently apply and interact with Internet technology, and create avenues to innovation and applications that have yet to be explored. Gigabit speeds have the potential to improve education and distance learning, close the digital divide by providing equal access to all and extend online healthcare to remote areas, all while accelerating economic development. So what's driving the demand for Gigabit broadband? Just a few short years ago, the average household had two or three devices connected to the home network. Today, in the Internet of Things (IoT) era, the number of connected devices is growing at an astounding rate, resulting in a substantial increase in the average number of connections per household. In addition, the use of the Internet has evolved. Gone are the days of viewing simple web pages; consumers now leverage the Internet for streaming over-the-top video content, cloud storage, sharing high-resolution images, interactive online gaming and more. Suddenly, the average U.S. Internet connection of 11.5 Mbps is no longer enough. Yet enhancing Internet speeds for consumers is only</description>
      </item>
      <item>
         <title>2012 Ethernet Tech Summit  Fingers Keep Pointing Toward New Technologies</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/2012-ethernet-tech-summit-fingers-keep-pointing-toward-new-technologies/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/2012-ethernet-tech-summit-fingers-keep-pointing-toward-new-technologies/</guid>
         <pubDate>February 27, 2012</pubDate>
         <description>In my last blog post, I talked about the upcoming launch of Romley/Sandy Bridge servers and how network partitioning is an easy and cost-effective way to help maximize the value of those new solutions. This week I'd like to expand further on the topic of maximizing value and ROI for new servers while sharing some key insights that I gathered during last week's Ethernet Technology Summit in San Jose. For those of you who aren't familiar with this event, the Ethernet Technology Summit is a three-day conference focused on the latest advancements in Ethernet technology (especially the migration from 1GbE to 10GbE, the emergence of 40/100 GbE, convergence, cloud computing, and virtualization). The conference features a wide range of keynote addresses, tutorials and panel discussions from experts who are heavily involved with product development and with creating and implementing industry specifications, and of course, from key decision makers who are helping to shape the future of the industry. Even though the terms Romley and Sandy Bridge weren't mentioned heavily in the sessions I attended, it was still very clear that there are several underlying trends and dynamics in the market that are placing a spotlight on this next generation of server technology. In particular, many questions were being raised about how future solutions will enable managers of data center and enterprise networks to dramatically increase bandwidth and speed on their servers while simultaneously saving time, money and power. There was also a lot of attention on storage (FCoE/iSCSI) and security, as well as on the migration from 1GbE to 10GbE and how higher-bandwidth platforms are on the verge of massive expansion over the next few years. I noticed a lot of attention being placed on how the transition to new devices and higher levels of bandwidth presents many new challenges and opportunities for the industry. While</description>
      </item>
      <item>
         <title>Why Being Fast, Fat and Flat Can Be a Good Thing!</title>
         <link>https://www.broadcom.com/blog/why-being-fast-fat-and-flat-can-be-a-good-thing</link>
         <guid>https://www.broadcom.com/blog/why-being-fast-fat-and-flat-can-be-a-good-thing</guid>
         <pubDate>April 9, 2012</pubDate>
         <description>There's a lot of discussion these days among network IT professionals about making the leap from 1 gigabit Ethernet (GbE) to 10GbE. In fact, I recently participated in a Wikibon Peer Incite discussion about this very topic. In a conversation titled &quot;The Rise of 10Gb Ethernet and the Impact of Intel's Xeon E5 Family of Processors,&quot; we not only talked about the speed advantages of 10GbE but also about some of the solutions (such as HP's Flex-LOM architecture) that make it easier to determine when the time is right to upgrade to faster speeds. [caption id=&quot;attachment_1715&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Graphic: Click for detailed view[/caption] We also discussed how 10GbE upgrades are having a direct impact on the way data centers are being constructed. The traditional data center has three tiers: presentation, application and database. This architecture leaves server resources in islands that are optimized for North-South networking traffic only. Each tier talks to its adjacent tiers, but not to servers in the same tier (traffic commonly referred to as East-West). Reconfiguring workloads and adapting to changes in this kind of data center can be very labor-intensive and time-consuming. It can also mean having less flexibility and incurring more operating expenses when it's time to make changes. To address these challenges, we talked about how the three-tier data center is giving way to the construction of virtual data centers, a structure in which all of the servers are fully connected via 10GbE in a &quot;flat&quot; network (one with fewer tiers). This not only simplifies network construction but also allows any set of servers to be configured in any logical tier, as the need arises. Ultimately, this eliminates compute islands, increases flexibility and reduces operating expenses. 
This also underscores one of the biggest lessons learned from utility/public cloud computing architectural practices: that fast, fat and flat networks save time</description>
      </item>
      <item>
         <title>Windows Server 2012: Even Better Through Broadcom Collaboration!</title>
         <link>https://www.broadcom.com/blog/windows-server-2012-even-better-through-broadcom-collaboration</link>
         <guid>https://www.broadcom.com/blog/windows-server-2012-even-better-through-broadcom-collaboration</guid>
         <pubDate>September 12, 2012</pubDate>
         <description>[caption id=&quot;attachment_4410&quot; align=&quot;alignleft&quot; width=&quot;222&quot;] Click to expand. Source: Microsoft Corp.[/caption] Unfortunately, I cannot take credit for the phrase &quot;the whole is greater than the sum of the parts.&quot; It is generally attributed to Aristotle (admittedly, a tough act to follow), but I will borrow from it as a way of highlighting my thoughts in this blog post. With the highly anticipated release of Microsoft Windows Server 2012, the industry is anxious to see which new features will have the greatest impact on users. While this is important, I would suggest that it is more important to look at the overall impact the combined technological advancements will have on the industry. Read Broadcom's press release here. Earlier this year, we saw several important industry announcements from major server vendors promoting Xeon E5 (Romley)-based platforms. As part of these announcements, Broadcom released new 1GbE and 10GbE NetXtreme Converged Network Adapters supporting L2 networking and iSCSI/FCoE storage offload protocols. This was critical because it is important to maintain balanced performance in a system. There is limited benefit in improving the processing capability of the server if its overall performance is bottlenecked by older I/O technology. By increasing both the processing power and the server I/O, the overall performance of the server is improved. We are now facing another transition in the industry, one in which three major components of the server are being advanced: Microsoft's powerful new Windows Server 2012 operating system, the new Romley-based servers from tier-1 vendors such as Dell and HP, and Broadcom's new NetXtreme CNA adapters. Together, these components are simultaneously advancing the operating system, the CPU and overall server architecture, and network and storage I/O. 
While the individual technology improvements are impressive, it's the combined advanced features and tight integration that make the value greater than the sum of the parts. The work behind</description>
      </item>
      <item>
         <title>Rajiv Ramaswami in Silicon India: &quot;NFV Can Make Networks Agile, Cost-Effective, Scalable and Secure&quot;</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/rajiv-ramaswami-in-silicon-india-nfv-can-make-networks-agile-cost-effective-scalable-and-secure/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/rajiv-ramaswami-in-silicon-india-nfv-can-make-networks-agile-cost-effective-scalable-and-secure/</guid>
         <pubDate>September 1, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Silicon India, in which Rajiv Ramaswami, Executive Vice President and General Manager of the Infrastructure &amp; Networking group at Broadcom, talks about Network Functions Virtualization. From Silicon India: As telecom providers and network operators move to a carrier cloud architecture, they are looking for cost-effective solutions to virtualize workloads on industry-standard servers. Virtualization has been established as a proven technology for improving the capacity, management and efficiency of server and storage systems in data centers, and the next logical step is to deliver virtualization to the network through what is known as Network Functions Virtualization (NFV). The NFV architecture concept is capable of virtualizing entire classes of network node functions into building blocks that may be connected, or chained together, to create communication services. Like server and storage virtualization, NFV gives data center operators the flexibility to relocate network functions from dedicated appliances to industry-standard, high-volume servers, switches and storage. NFV can make networks more agile, cost-effective, scalable and secure, which enables businesses to deploy new services quickly and gain a competitive edge. 
NFV delivers agility by programming intelligence into the network via software as needed. Network appliances that can be delivered virtually include firewalls, session border controllers, radio access network nodes and WAN acceleration devices, just to name a few. Virtualization allows operators to adjust the capacity and other specifications of that equipment, depending on its purpose. NFV provides scalability by allowing operators to dial network capacity up or down as demand changes. This scalability also allows operators to adjust their network architecture across multiple servers, even across multiple data centers anywhere in the world, in a way they can't do with only physical data center assets.</description>
      </item>
      <item>
         <title>Broadcom Joins OpenPOWER Foundation, Kicks Off Plugfest at Yearly Summit</title>
         <link>https://www.broadcom.com/blog/broadcom-joins-openpower-foundation-kicks-off-plugfest-at-yearly-summit</link>
         <guid>https://www.broadcom.com/blog/broadcom-joins-openpower-foundation-kicks-off-plugfest-at-yearly-summit</guid>
         <pubDate>April 6, 2016</pubDate>
         <description>In the multi-layered world of mega-scale data centers, which are expected to handle an exponential swell of data, it's getting harder and more costly to deliver generational gains in performance, speed, bandwidth and efficiency with each passing year. The industry has been pushed to address the issue by cracking open the layers of hardware and software that make up these massive networks, a trend that's known as open-source networking or network disaggregation. We've seen the rise of open-source networking trade groups and consortia, including OpenStack, the Open Compute Project and others, which are collectively driving new technology standards that promise to change the way companies design, integrate and operate their systems. At the OpenPOWER Foundation Summit this week in San Jose, Broadcom Technologies, a Broadcom Limited company, announced that it joined a cohort of more than 200 members of the OpenPOWER Foundation, including tech titans such as IBM, Google, Samsung, NVIDIA and more. Broadcom joins a growing group of tech companies, universities and trade organizations working collaboratively to build advanced server, networking, storage and acceleration technology. The end goal: more choice, control and flexibility for developers of next-generation, hyperscale and cloud data centers. &quot;Broadcom is actively helping to broaden the ecosystem around the POWER architecture, with the end goal of driving innovation around common standards,&quot; said Jas Tremblay, vice president of marketing in the Data Center Solutions Group at Broadcom. The OpenPOWER Foundation seeks to unlock what Rackspace calls &quot;the last black boxes&quot; in the server environment by enabling development on top of IBM's POWER-based microprocessor architecture. 
&quot;Broadcom's industry-standard MegaRAID storage technology is already trusted by the world's largest data center customers. Bringing this technology to OpenPOWER is a natural transition for those customers looking for the performance benefits that IBM's POWER processing offers,&quot; Tremblay said. The Foundation is</description>
      </item>
      <item>
         <title>Broadcom's 5G WiFi Makes Technology Top 100 List for 2012</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcoms-5g-wifi-makes-technology-top-100-list-for-2012/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcoms-5g-wifi-makes-technology-top-100-list-for-2012/</guid>
         <pubDate>June 21, 2012</pubDate>
         <description>R&amp;D Magazine has named Broadcom's 5G WiFi family of chips among the 2012 R&amp;D 100 award winners, naming the innovation one of the 100 most technologically significant products of the last year. Broadcom's 5G WiFi SoCs, based on the IEEE 802.11ac standard, were introduced in January at CES. They were designed to replace the aging 802.11n technology and satisfy the demand for faster, more reliable Wi-Fi in the hyper-connected age of tablets, smartphones and the connected home. Since then, the technology has been launched by a number of partners, including Netgear, Buffalo Technology, Asus and Belkin, which showcased the technology at a Connected Home event in New York City this week. 5G WiFi delivers up to gigabit speeds of connectivity with six times more power efficiency than the previous generation of Wi-Fi. It operates on a different spectrum than 802.11n, increasing signal range and passing through more obstacles to overcome some of those dreaded dead spots. It was designed with the modern multi-tasker and family home in mind, able to connect several devices and enable multiple video streams at once, without pauses for buffering. Broadcom's 5G WiFi chip family is in excellent company on R&amp;D Magazine's list, alongside innovations such as energy technologies from NASA and a laser and photonic development from a laboratory at MIT. The award is celebrating its 50th anniversary and honors products from a range of disciplines, including telecom, physics, software, manufacturing and biotech, developed by companies, universities, research firms and government labs. 
Related Posts: 5G WiFi: Pioneering the New Generation of Wireless Connectivity Broadcom at Computex: 5G WiFi and Gigabit Throughput [Video] 5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle</description>
      </item>
      <item>
         <title>First 5G WiFi Product Hits the Shelves</title>
         <link>https://www.broadcom.com/blog/wireless-technology/first-5gwifi-product-hits-the-shelves/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/first-5gwifi-product-hits-the-shelves/</guid>
         <pubDate>April 26, 2012</pubDate>
         <description>Wireless technology has officially turned the corner: the first 5G WiFi products are now on the shelves. Today, Netgear is announcing the availability of the R6300 WiFi Router, the first dual-band gigabit WiFi router powered by Broadcom's fifth-generation WiFi, or IEEE 802.11ac, chips. Unveiled in January at the Consumer Electronics Show, Broadcom's 5G WiFi chips are delivering faster throughput, higher capacity, broader coverage and longer battery life. For today's consumer, that translates to a more robust wireless home network, one that not only streams content and powers advanced voice and video services but also allows a greater number of devices, from PCs and mobile devices to set-top boxes and gaming consoles, to access it. WiFi networks are common in today's homes, but increasingly, the online tasks surrounding video, whether streaming a movie or conducting a video chat with a friend, are demanding something more robust. Netgear's 5G WiFi router, for example, has speeds of up to 1300 Mbps on 5GHz and 450 Mbps on 2.4GHz, enabling consumers to download web content from any device in the home in a fraction of the time it would take on a similar 802.11n device. And while that's important, consumers are bound to be intrigued by the power of Netgear's Genie app, which unleashes networked photos, videos and music for playback on any connected device, provides separate guest access and networks USB-connected printers so that any device can access them, even mobile devices. It's also DLNA ready and can stream to any DLNA-compatible device in the house, including the latest Smart TVs, Blu-ray players, media players, game consoles, handheld devices and tablets. More importantly, the technology will allow the mobile device marketplace to continue to grow and flourish. According to a Cisco report, video currently constitutes 40-50 percent of all Internet traffic but is expected to reach 91 percent by 2015. And</description>
      </item>
      <item>
         <title>Broadcom at Mobile World Congress: A Pre-Show Sneak Peek</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-at-mwc-a-pre-show-sneak-peek/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-at-mwc-a-pre-show-sneak-peek/</guid>
         <pubDate>February 24, 2013</pubDate>
         <description>BARCELONA -- The show doesn't officially begin until tomorrow, but the festivities around the Mobile World Congress conference kicked into high gear Sunday evening with Mobile Focus Global, an invitation-only event where companies gathered to showcase their technologies for a select group of journalists, bloggers, analysts and more.

Broadcom was in attendance with a prime spot near the event's main entrance and an attractive demo that featured the Nintendo Wii U Game Console, which showcases Broadcom's wireless connectivity technologies. Likewise, Broadcom was also highlighting its Near Field Communication technology, utilizing smartphones and tablets to illustrate how a simple tap of a smart tag can bring up relevant information on the screen.



The crowd was eager to learn more about the technologies at the Broadcom booth, as well as the dozens of others at the show, including Broadcom partners such as Norton, Rovi, T-Mobile, the Alliance for Wireless Power and others.

These images are just a sampling of the sights from the event. Check out our Facebook Photo Album for more pictures - and be sure to follow us on the blog and across social media for regular updates from Broadcom's booth and the show floor.

Come by and see us at:

Hall 3 (Hybrid Hall), Booth #3C14, Fira de Barcelona Gran Via

Not heading to Barcelona? Get the latest MWC news from Broadcom and our partners by liking us on Facebook, following us on Twitter and reading the blog.

Related:

	Ahead of Mobile World Congress: Broadcom's Latest GPS Tech Zooms in on Geofencing
	Connect with Broadcom in the Mobile World Capital: Looking at Tech from the Inside-Out
</description>
      </item>
      <item>
         <title>Brian Bedrosian in Embedded Computing: &quot;The Home Automation Market Will Reach $16.4B by 2019&quot;</title>
         <link>https://www.broadcom.com/blog/wireless-technology/brian-bedrosian-in-embedded-computing-the-home-automation-market-will-reach-16-4b-by-2019/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/brian-bedrosian-in-embedded-computing-the-home-automation-market-will-reach-16-4b-by-2019/</guid>
         <pubDate>August 5, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Embedded Computing, in which Brian Bedrosian, Senior Director of Embedded Wireless at Broadcom, talks about the future of home automation. From Embedded Computing: The evolution of the Internet of Things (IoT) is driving significant growth and opportunities in a variety of end-market segments, and analysts expect the home automation market to reach $16.4B by 2019 as consumer demand for automated control and monitoring of the home continues to rise. In today's smart home, any number of devices and systems can be managed remotely, including lighting, heating, air conditioning, security, and appliances such as refrigerators, washers, dryers, dishwashers, and more. Leveraging the in-home wireless network, remote control of such systems makes it easy to know that everything is running smoothly at home, manage energy usage from anywhere and control key features of home appliances. In these robust early stages of IoT market development, much of the product innovation is coming from startups with great ideas but challenged by staffing, funding and production capacity issues. In order to help these fledgling players grow and keep up with demand, smart home technology depends on a number of elements, including a vibrant smart home ecosystem, powerful components, well-designed software and hardware platforms and, most importantly, interoperability among devices. 
The driving force behind the development and adoption of smart home appliances is the wireless connectivity that connects smart devices to the home network for anywhere, anytime control and monitoring of home appliances. Enabled by proven technologies such as Wi-Fi, Bluetooth Smart, NFC and powerline communications (PLC), efficient designs continue to reduce the processing and power requirements of smart appliances. These efficiencies, in turn, enable manufacturers to design, produce, and go to market with</description>
      </item>
      <item>
         <title>Have Wi-Fi, Will Travel: Wi-Fi Alliance Improves Passpoint for Better on-the-go Experiences</title>
         <link>https://www.broadcom.com/blog/wireless-technology/have-wi-fi-will-travel-wi-fi-alliance-improves-passpoint-for-better-on-the-go-experiences/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/have-wi-fi-will-travel-wi-fi-alliance-improves-passpoint-for-better-on-the-go-experiences/</guid>
         <pubDate>October 10, 2014</pubDate>
         <description>As a growing number of consumers rely on connected smartphones to manage their daily tasks, both work and play, the availability of widespread, robust Wi-Fi connections becomes important. To meet those demands, a growing number of managed, or service provider, Wi-Fi networks are starting to be offered by cable companies, ISPs and even by savvy retailers and businesses. Consumers benefit because they get easy access to a reliable Wi-Fi connection wherever they roam. In exchange, businesses get the eyeballs of their most engaged customers, increase sales and have an opportunity to reinforce brand loyalty. To help expand the effort, the Wi-Fi Alliance last week updated its Wi-Fi Certified Passpoint program, a standard launched in 2012 to facilitate access to service provider Wi-Fi networks and to streamline connectivity for Wi-Fi-only devices such as tablets and notebooks. Read the Wi-Fi Alliance Press Release. Broadcom, a leader in Wi-Fi, was an early test-bed participant and continues to back Passpoint alongside other industry players, including chipmakers, consumer electronics companies and enterprise access point vendors. Standardizing on Passpoint is important because it provides consumers with Wi-Fi connections that are reliable, secure and easy to access. The experience suffers if the Wi-Fi signal is spotty or slow, because frustrated customers may find a new place to get their work done away from the office. Providers of managed Wi-Fi networks want to cultivate loyalty and repeat visits from their customers, and an easy login process can help. Among the new features of Passpoint 2.0 is that it eliminates the need for consumers to re-authenticate, or to log in to a network every time they want to connect. 
[Infographic: The demand for Passpoint's features, courtesy of the Wi-Fi Alliance] Instead, Passpoint provides a seamless transition between hotspots as consumers go from place to place.</description>
      </item>
      <item>
         <title>Broadcom delivers cloud-scale economics with the Tomahawk II Ethernet switching chip</title>
         <link>https://www.broadcom.com/blog/broadcom-delivers-cloud-scale-economics-with-the-tomahawk-ii-ethernet-switching-chip</link>
         <guid>https://www.broadcom.com/blog/broadcom-delivers-cloud-scale-economics-with-the-tomahawk-ii-ethernet-switching-chip</guid>
         <pubDate>November 28, 2016</pubDate>
         <description>The business of technology – at its core – is the business of “What’s next?” As cloud computing, the ubiquity of mobile devices, the science of data analytics and other transformative ideas shape our digital and experiential futures, chip designers are focused on helping customers develop the “new next” by answering the call for faster speeds and increased connectivity in the silicon they’re delivering. The design teams at Broadcom are no different. Building upon the legacy of the successful Tomahawk® platform (2014) and Trident (2010) before that, Broadcom now brings to market the world’s most capable 100GE switching solution – the Tomahawk II – with the speed, bandwidth and feature set to satisfy network and cloud growth requirements – at exactly the time the market needs it most. Tomahawk II delivers the right feature set for OEMs and developers. Designed for cloud-scale data centers and high-performance computing (HPC) environments, Tomahawk II is the latest variation in the StrataXGS® Tomahawk family of Ethernet switches. Arriving just two years after its predecessor, Tomahawk II has double the bandwidth and resource capability of Tomahawk and 10x the bandwidth of Trident. Configurable as 64 ports of 100GE or 128 ports of 50GE, Tomahawk II operates at 6.4 Tb/s with packet switch engines optimized for software defined networking (SDN). Manufactured in 16nm, it integrates 256 SerDes lanes running at more than 25 Gb/s and includes large on-chip forwarding tables and packet buffer memory. By doubling the port count relative to its predecessor, Tomahawk II enables network designers to deploy next generation networks with fewer switches and interconnect links, delivering unprecedented cost and power savings for cloud-scale applications. The true measure of any first-to-market innovation, though, is its acceptance by developers wanting to quickly create and deploy products. The proven architecture of the Tomahawk line and its</description>
      </item>
      <item>
         <title>NVMe over Fabrics: What’s next for NVMe</title>
         <link>https://www.broadcom.com/blog/nvme-over-fabrics-whats-next-for-nvme</link>
         <guid>https://www.broadcom.com/blog/nvme-over-fabrics-whats-next-for-nvme</guid>
         <pubDate>December 7, 2016</pubDate>
         <description>NVMe SSDs have been around for several years and are leveraged in enterprise data centers (as in-server storage) through to laptop applications. However, until now it’s been impossible to scale NVMe beyond the rack and across the data center, leaving data stranded on storage islands. In June 2016, NVMe.org ratified the NVMe over Fabrics (NVMe-oF) standard, which enables enterprises with storage area networks (SANs) to take advantage of low-latency NVMe SSDs by using a fabric, rather than PCI Express, as the attach point to the host. This approach enables NVMe to scale to potentially thousands of SSDs.

 

NVMe over Fabrics options include Fibre Channel, Ethernet (RDMA), InfiniBand and OmniPath. Broadcom was a key contributor to the NVMe-oF standard and contributed FC-NVMe Linux extensions.

 

NVMe over Fibre Channel is a natural choice for data centers; it is easy to implement on existing FC networks with a simple driver update, and no burdensome infrastructure changes or additional training are required. Most significantly, data centers can leverage the lossless, high-performance Fibre Channel protocol, built to scale easily to thousands of nodes, to get low-latency connectivity for NVMe SSDs that outperforms other protocols, especially under load.

 

The Emulex-branded Gen 6 HBAs by Broadcom offer a simple way to transition to NVMe: dual-mode HBAs support both NVMe and SCSI drives concurrently, enabling data centers to transition to all-flash arrays at their own pace.

</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for the Quartz TSN Ethernet switch</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-the-quartz-tsn-ethernet-switch</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-the-quartz-tsn-ethernet-switch</guid>
         <pubDate>June 21, 2017</pubDate>
         <description>From Rick Merritt at EE Times: “The 28-nm StrataConnect BCM53570 targets a broad range of systems, including self-driving cars, cellular base stations, professional audio/video gear, and high-end industrial automation systems. It is sampling now in two versions with configurable port speeds, starting from a switch with 24 Gbit ports and four 10G ports. ‘We are providing determinism at layer 2, guaranteeing delivery of packets within minimum and maximum tolerances,’ said John Mui, director of marketing for Broadcom’s core switch group. ‘Nothing in Ethernet has been able to do that, although there are some industrial real-time protocols similar to Ethernet, but not supported as IEEE standards.’” From Charlie Demerjian at SemiAccurate: “Once you get this far you can further reduce latency by choosing simpler protocols, the hardware does the hard parts for you. The more you look at the benefits, the more they snowball into lower latencies and simpler code. Simpler code usually means less bugs but also can mean lower costs for OEMs too. In addition to the first order benefits, TSN has a lot of secondary values. Overall TSN brings higher level OSI stack features to L2 and does them in hardware without user intervention. This cuts out latency and adds reliability in a much simpler way, for the user anyway, and does it in a uniform fashion. In theory TSN Ethernet should be as simple as Ethernet with no proprietary stacks to integrate into your solution.” From Michael Cooney at NetworkWorld: “The key point of the BCM53570 family is that it fully implements TSN standards right out of the chute,” said Jeff Nightingale, senior product line manager, Core Switch Group at Broadcom. “That means it supports Time Synchronization, where nodes in the network are synchronized with master timing (802.1AS Rev, 1588v2); guaranteed low latency for high priority packets even</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Broadcom's 802.11ax Max WiFi</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-broadcom-s-802-11ax-max-wifi</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-broadcom-s-802-11ax-max-wifi</guid>
         <pubDate>September 13, 2017</pubDate>
         <description>From Tom Rodgers in Broadband Deals: “The sixth generation of WiFi is coming. Broadcom, who manufacture chips for use in routers, say they’ve completed testing on the new tech which promises four times faster downloads and six times faster uploads than is currently possible. 802.11ax ‘Max WiFi’ was designed to solve the problem where people get fast broadband but slow WiFi, because of multiple devices crowding the same network.” From CTimes: “Broadcom launched Max WiFi, the family of connectivity solutions using the next Wi-Fi standard, 802.11ax. The Max WiFi chips enable up to four times faster download speeds, six times faster upload speeds, four times better coverage, and seven times better battery life than similar Wi-Fi solutions on the market today that use 802.11ac.” From Monica Allen in Fierce Wireless and Tech Investor News: “Dubbed Max WiFi, with a play on the 802.ax moniker, Broadcom’s set of solutions include the BCM43684, a chip targeted for the residential Wi-Fi market; the BCM43694 optimized for use in enterprise access points; and the BCM4375, a smartphone combo chip. All of them feature OFDMA and MU-MIMO, among other things. Broadcom is sampling them to customers now in retail, enterprise and smartphone, service provider and carrier segments. Expectations call for any new products in 2018 to be based on 802.11ax. Broadcom already has a bevy of customers teed up, with support from everyone from Microsoft to ASUS.” From Anna Ribeiro in IoT Innovator: “Broadcom Limited upgraded its position in Wi-Fi by launching Max WiFi, its line of connectivity solutions using the next Wi-Fi standard, 802.11ax. The Max WiFi chips enable up to four times faster download speeds, six times faster upload speeds, four times better coverage, and seven times better battery life than similar Wi-Fi solutions on the market today that use 802.11ac. The chips</description>
      </item>
      <item>
         <title>Broadcom’s Trident 3 enhances ECMP with Dynamic Load Balancing</title>
         <link>https://www.broadcom.com/blog/broadcom-s-trident-3-enhances-ecmp-with-dynamic-load-balancing</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-trident-3-enhances-ecmp-with-dynamic-load-balancing</guid>
         <pubDate>September 29, 2017</pubDate>
         <description>
The workloads that are running in datacenters are evolving as technologies like SDN and containers become more prevalent. Equal Cost Multipath (ECMP) has been the main tool for balancing link-level loading in datacenter networks. However, datacenter networks are continuing to scale up in both size and bandwidth. The static hashing algorithm, which constitutes the core of ECMP’s load-balancing, has become more complex over time to adapt to the constantly changing nature of datacenter workloads. But static hashing has started to reach the limits of its capabilities to handle the more dynamic and bursty nature of datacenter traffic.

Trident 3 introduces Dynamic Load Balancing (DLB). DLB is a hardware engine that works at line rate to constantly monitor ECMP link loading. If DLB detects that the ECMP static hash is not doing a good job at keeping link-loading balanced, the DLB engine can re-assign flows to a different ECMP link to maintain the balance. DLB can also handle link-failure events (e.g. failing optics modules) autonomously in hardware, minimizing traffic loss and improving system availability.
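
The contrast between static hashing and DLB described above can be illustrated with a toy model. This is a minimal Python sketch under stated assumptions, not Broadcom's implementation: the EcmpGroup class, the byte-count load model and the 2x imbalance threshold are all illustrative choices, and real DLB runs at line rate in switch hardware rather than in software.

```python
import hashlib

class EcmpGroup:
    """Toy ECMP group with a DLB-style rebalance pass (illustrative only)."""

    def __init__(self, num_links):
        self.num_links = num_links
        self.load = [0] * num_links      # recent bytes observed per link
        self.pinned = {}                 # DLB override table: flow -> link

    def static_hash(self, flow):
        # Classic ECMP: hash the flow key; the same flow always lands on
        # the same link, regardless of how loaded that link is.
        digest = hashlib.sha256(repr(flow).encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.num_links

    def send(self, flow, nbytes):
        # A DLB reassignment, if present, overrides the static hash.
        link = self.pinned.get(flow, self.static_hash(flow))
        self.load[link] += nbytes
        return link

    def rebalance(self, active_flows):
        # DLB-style pass: if the static hash left one link far hotter than
        # another, pin one flow from the hot link onto the cold one.
        hot = max(range(self.num_links), key=lambda i: self.load[i])
        cold = min(range(self.num_links), key=lambda i: self.load[i])
        if self.load[hot] <= 2 * self.load[cold]:
            return None                  # loading is balanced enough
        for flow in active_flows:
            if self.pinned.get(flow, self.static_hash(flow)) == hot:
                self.pinned[flow] = cold
                return flow              # move one flow per pass
        return None
```

The sketch also hints at why DLB helps with link failures: marking a failed link as infinitely loaded would steer pinned flows away from it without waiting for the control plane.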

For more details on DLB and the various problems it can solve, please press Play on the image above and enjoy the video.



LEARN MORE

In the Broadcom blog:  New Trident 3 switch delivers smarter programmability for enterprise and service provider datacenters

The news release is here:  Broadcom Delivers High Performance Data Plane Programmability with new Trident 3 Generation of 10/25/100G Ethernet Switches

</description>
      </item>
      <item>
         <title>Maximizing the Value of Romley Servers with NPAR</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/maximizing-the-value-of-romley-servers-with-npar/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/maximizing-the-value-of-romley-servers-with-npar/</guid>
         <pubDate>February 17, 2012</pubDate>
         <description>In last week's blog post, I provided Broadcom's perspective on the soon-to-be-released Romley-based servers and what it means for the industry. But before I get started, I want to answer a question about Romley and how it relates to Intel's new Sandy Bridge-EP processor (also on the verge of release). To set the record straight, Romley is actually Intel's code name for the server platform that employs the Sandy Bridge-EP CPU, so we tend to use them somewhat interchangeably. In this week's post, I'd like to focus on another key technology that's being deployed by network managers and will continue to be instrumental during the Romley cycle. The technology is Network Interface Controller (NIC) partitioning, or what is commonly referred to as NPAR. NPAR is very important because it gives administrators the ability to divide a single fat network port into as many as four logical ports and allocate the fat-pipe bandwidth into whatever configurations best fit the application. The end benefit is a better allocation of server resources and better management of those resources, both of which contribute to lowering infrastructure and operating costs. About a year ago, Broadcom introduced a Network Daughter Card (NDC) that delivers either quad-port 1GbE or dual-port 10GbE. The BCM57712-k adapter suddenly gave network administrators the ability to deploy 10GbE on two physical ports or divide the bandwidth into multiple logical ports (up to four per physical port, or eight per controller). It's a very effective solution that tailors network bandwidth to the need at hand. It also allows for dynamic reconfiguration of that bandwidth. NPAR continues to expand into network systems worldwide. Some of you may be asking: &quot;But do I also have to buy new and expensive NPAR-enabled switches to use that feature?&quot; The answer is a resounding no. The NPAR feature of Broadcom's 57712-k is</description>
      </item>
      <item>
         <title>Broadcom at Interop: Next-Generation Data Centers Shift into High Gear</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-next-generation-data-centers-shift-into-high-gear/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-next-generation-data-centers-shift-into-high-gear/</guid>
         <pubDate>May 4, 2012</pubDate>
         <description>Ten years ago, data centers didn't know how good they had it: businesses hummed along, sending (relatively) small presentations and documents over the network, with perhaps an occasional graphic or photo as well. Today, data centers grapple with a massive amount of complex, real-time information, including videos, multimedia decks and streaming web content. In order to keep up, enterprise technology is moving forward at a rapid rate as it responds to many of the same trends that consumers face, such as increasing internet traffic and rising demand for high-bandwidth content.

It's an exciting time for Broadcom's engineers working on next-generation data center solutions, and this week's launch of the new BCM84790 and BCM84793 gearbox physical layer transceivers (PHYs) is no exception. The first in their class to support 10GbE, 40GbE and 100GbE line interfaces, the chips address the exponential rise in demand for high-bandwidth applications, such as video streaming and online file sharing, as well as the higher-density processing and transmission of data those demands produce. The chips also lay the foundation for a predicted 50x increase in 100GbE ports from 2011 to 2016 (according to Infonetics) while maintaining compatibility with existing 40G and 10G rates.

With Interop fast approaching, we encourage attendees to stop by our booth to see the technology in action. Others can find our full lineup of news from the show by following us on Twitter or tracking show news using the #interop hashtag.

Full Coverage: Broadcom at Interop 2012

	Broadcom at Interop: Power Consumption Technology Plays Important Role
	Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet
	Technology Moving at the Speed of Life: Broadcom Enables Massive Network Scalability
	Enterprise 2.0: Broadcom puts Network Managers in the Fast Lane

 </description>
      </item>
      <item>
         <title>Technology Moving at the Speed of Life: Broadcom Enables Massive Network Scalability</title>
         <link>https://www.broadcom.com/blog/technology-moving-at-the-speed-of-life-broadcom-enables-massive</link>
         <guid>https://www.broadcom.com/blog/technology-moving-at-the-speed-of-life-broadcom-enables-massive</guid>
         <pubDate>May 2, 2012</pubDate>
         <description>As the popularity of social networking, streaming video and high bandwidth business services continues to climb, demand for higher-speed networks is growing at an astounding pace.

As consumers and business professionals, however, we aren't always aware of the technology needed to ensure data moves along with the speed of our lives. Large data centers with literally thousands of servers require 100 Gbps network connectivity from the core to the edge, while large service provider networks require high-density core switching platforms with 100 Gbps interfaces to support increasing access capacities like 10G PON.

As a result, scalable, energy efficient (and not to mention affordable) 100 Gigabit Ethernet platforms are fast becoming a key requirement for the newest switching infrastructures.

This week, we unveiled our latest solution designed to power the next-generation data center. The BCM88650 series enables the design of switching platforms with densities up to 4,000 100GbE ports, delivering terabit connectivity from the edge to the core of the network.

With the industry's highest level of integration, the BCM88650 system on chip (SoC) combines the features and functionality of a complete line card into a single chip. Together with Broadcom's innovative FE1600 (BCM88750) fabric, the BCM88650 SoC enables a new generation of high-density networking solutions exceeding 100 terabits per second (Tbps).

Come by our Interop booth next week to see the technology in action or visit our website to learn more.

Full Coverage: Broadcom at Interop 2012

	Broadcom at Interop: Power Consumption Technology Plays Important Role
	Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet
	Enterprise 2.0: Broadcom puts Network Managers in the Fast Lane
	Broadcom at Interop: Next-Generation Data Centers Shift into High Gear
</description>
      </item>
      <item>
         <title>Broadcom Tackles Cloud Control at VMworld</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-tackles-cloud-control-at-vmworld/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-tackles-cloud-control-at-vmworld/</guid>
         <pubDate>August 28, 2012</pubDate>
         <description>Broadcom is at the center of the technology that's powering the infrastructure for modern-day data centers, an infrastructure that can not only support an exponential increase in bandwidth and performance but also deliver such scale within the bounds of cloud-scale economics and without compromising features. In a blog post earlier this month, I chimed in about how the demands being placed on enterprise networks are leading to a new era of cloud-scale networking that can meet the needs of the next-generation data center. The demand for public and private clouds is showing no signs of slowing, as growth is predicted to increase by 50 percent over the next three years. IT managers are facing real challenges. This week at the VMworld conference in San Francisco, we are joining our partners in showcasing the technologies that can address these real-world challenges. Today, Broadcom unveiled its latest innovation in cloud-scale networking: the StrataXGS Trident II Switch Series. Read the press release announcing Trident II. The new Trident II series is a breakthrough technology that delivers the world's highest 10/40GbE switch density and unique feature innovations, saving data center managers money while allowing them to keep pace with increasing demands on their networks. Different Networking Environments: What's key is the flexibility that allows the switching technology to work across different networking environments -- so-called private, public and hybrid clouds -- using common hardware building blocks for network switch equipment, enabling greater development and deployment scale and economies. 
A private cloud network in large enterprises is tied to equipment that meets the specific needs of enterprise applications, some of them legacy. Public cloud networks, by contrast, tend to be more open, built on a greenfield design that allows more application-deployment flexibility and cost-effective scaling to suit the needs of varied tenant requirements. They are also designed</description>
      </item>
      <item>
         <title>That's a Wrap! Interop '13 Brings SDN into Focus</title>
         <link>https://www.broadcom.com/blog/thats-a-wrap-interop-13-brings-sdn-into-focus</link>
         <guid>https://www.broadcom.com/blog/thats-a-wrap-interop-13-brings-sdn-into-focus</guid>
         <pubDate>May 9, 2013</pubDate>
         <description>This year at Interop, top IT professionals, bloggers and industry analysts converged in Las Vegas to see the future of networking on display. [caption id=&quot;attachment_8878&quot; align=&quot;alignright&quot; width=&quot;300&quot;] A look inside Broadcom's booth at Interop.[/caption] As for Broadcom, the company announced products that are driving the next wave of networking innovation: enabling enterprise and SMB-level companies to adopt the cloud and deploy other advanced networking technologies, and bringing the power of big data to every business, regardless of size. Broadcom also released an advanced enterprise access point system-on-a-chip (read: mega router for businesses) that supports super-fast and power-efficient 802.11ac wireless networking, or 5G WiFi. The new releases landed some press buzz, and many tech reporters zeroed in on Broadcom's commitment to power efficiency, a major concern for all players in the industry. Tech Zone 360: Broadcom Introduces Two New System-on-a-chip (SoC) Processors at Interop Enterprise Networking Planet: Broadcom Secures WiFi with New Silicon Converge Network Digest: Broadcom's Next-gen 40nm PHYs Promise 40% Power Savings This year's Interop was billed as the coming-out party for Software Defined Networking (SDN), and Broadcom contributed to the conversation in a panel discussion with Ram Velaga and a keynote with other execs from Microsoft and VMware. They each discussed the future of networking and how companies will adapt to an increasingly software-defined world.
Light Reading: What Applications Will Want from SDN Enterprise Networking Planet: Interop Panel Tackles SDN Fierce Enterprise Communications: Networks Must Get Faster and Flatter, Judges Broadcom's Ramaswami Also attracting a lot of attention was the announcement by Facebook that Broadcom is one of the companies being invited to help develop the first open-source switch. Our engineers will join peers from other companies to develop a standard for switches that will become part of the Open Compute Project. New York Times: Opening</description>
      </item>
      <item>
         <title>New Ethernet Spec Gains Steam, More Members Back 25/50 Gbps Standard</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/new-ethernet-spec-gains-steam-more-members-back-2550gbps-standard/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/new-ethernet-spec-gains-steam-more-members-back-2550gbps-standard/</guid>
         <pubDate>September 8, 2014</pubDate>
         <description>The industry consortium that's pushing 25 Gigabit-per-second Ethernet, a new technology standard that promises a big jump in data center performance while lowering costs, is starting to take root. Cisco Systems, Dell, Brocade and Juniper are among the most recent members to join the 25 Gigabit Ethernet Consortium, a group spearheaded by Broadcom and other cloud-scale data center players to promote a new Ethernet networking industry standard. Meanwhile, the IEEE has taken the first step toward establishing industry-wide specs for 25 Gbps Ethernet, with a recent vote by the global standards-setting body. At a July meeting of the IEEE, 121 of 148 engineers voted to start a group to work on the interface that will link rack-mounted servers to top-of-rack switches in large data centers, EE Times reported. The working group will help establish specs for products that support the 25 Gbps Ethernet speed. [cf-shortcode plugin=&quot;generic&quot; field=&quot;brcm_links_right&quot;] Among those at the meeting, 59 said they would join the working group, while representatives of 39 companies expressed interest, including engineers from Applied Micro, Broadcom, Fujitsu, Hewlett-Packard, IBM, Juniper, Marvell and ZTE, the report showed. Cabling cost, the biggest capital expense for operators of mega-scale data centers like those run by tech giants Facebook, Microsoft and Google, is where the 25 Gbps / 50 Gbps Ethernet architecture can make a substantial difference. Cutting that wiring cost in half while getting a boost in performance is a huge selling point for operators who were previously intending to scale up to 40 Gbps in the access/leaf layer of the network, and to 100 Gbps Ethernet in the aggregation/spine layer. Current IEEE industry standards use four physical lanes running at 10 Gbps each to reach that 40 Gbps speed, which the Consortium says is less efficient than the 25 Gbps and 50 Gbps Ethernet technologies. That</description>
      </item>
      <item>
         <title>Engineers in China: Broadcom's NFC chips are &quot;Product of the Year&quot;</title>
         <link>https://www.broadcom.com/blog/wireless-technology/engineers-in-china-broadcoms-nfc-chips-are-product-of-the-year/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/engineers-in-china-broadcoms-nfc-chips-are-product-of-the-year/</guid>
         <pubDate>February 26, 2012</pubDate>
         <description>Broadcom's engineering excellence around Near Field Communications technology has been recognized by a respected group of Chinese engineers and, this week at the Mobile World Congress conference, some of that top-notch technology is being showcased.

The EETimes-China Annual Creativity in Electronics (ACE) Award for Product of the Year was given to Broadcom's BCM2079x family of chips, which are designed to enable mass deployment of NFC in electronic devices.

NFC technology is sparking new innovations in device-to-device data/video transfer, pairing and connections, such as those used to enable advanced mobile payment systems. NFC makes smartphones even smarter, and the fact that engineers in China have offered their endorsement of Broadcom's work in this area &quot;speaks volumes,&quot; said Craig Ochikubo, VP and GM for the WPAN line of business at Broadcom. In a statement, Ochikubo said:
NFC has the potential to transform the use of smartphones. Not only is there the contactless mobile payments and ticketing opportunity, but equally or more exciting is the ability to enable radically simplified connectivity between the handset and other devices like Bluetooth headsets and Wi-Fi-enabled digital televisions. We're honored that our industry recognizes the disruptive potential of NFC. We firmly believe that NFC is opening the doors to the next evolution of wireless innovation.
Broadcom is demonstrating its NFC innovations this week at Mobile World Congress 2012 in Barcelona.

Related coverage:

	Broadcom at Mobile World Congress
	Broadcom NFC Solutions
	BCM2079x Family Product Page
</description>
      </item>
      <item>
         <title>5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle Next-Gen Devices and Demands</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5g-wifi-introducing-a-wi-fi-powerful-enough-to-handle-next-gen-devices-and-demands/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5g-wifi-introducing-a-wi-fi-powerful-enough-to-handle-next-gen-devices-and-demands/</guid>
         <pubDate>May 14, 2012</pubDate>
         <description>How many of us remember a time before electricity? Or the telephone? Or cable TV? How about Wi-Fi? Like many other technology breakthroughs, Wi-Fi is well on its way to becoming as mainstream as the electrical outlet. And just like the outlet changed through the years to become safer and more energy efficient, Wi-Fi is turning a corner into a next-generation version that's faster, more reliable and better equipped to handle not only today's demands but the wave of future demands that will come from more users, more devices and more data. [caption id=&quot;attachment_2552&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Click to see the evolution of Wi-Fi[/caption] The number of Wi-Fi-enabled devices has already grown exponentially in the last decade or so, as has the number of places where Wi-Fi is accessible. A recent survey found that free Wi-Fi was the most wanted amenity among hotel guests, beating out free breakfast and free parking. Today, the number of new devices connecting to Wi-Fi is growing far beyond the traditional PC. Smartphones, tablet computers, game consoles and the TV are among the latest to tap into the network. Next up are home appliances like thermostats, washing machines and refrigerators. The growth is so fast that researchers predict there will be 5 billion Wi-Fi connected devices worldwide by 2014. The onslaught of new ways to use Wi-Fi calls for something that's faster and more reliable, powerful enough to cover a broader range and robust enough to handle more devices transmitting data-heavy content, including high-definition video. Meet 5G WiFi, officially known as IEEE 802.11ac. At CES 2012, Broadcom introduced the world's first 802.11ac chips, enabling gigabit connectivity where previously only megabit speeds had existed. This is a major turning point for Wi-Fi because it not only meets the growing demands of today but also helps fuel a new ecosystem of communication, entertainment and productivity</description>
      </item>
      <item>
         <title>5G WiFi: Pioneering the New Generation of Wireless Connectivity</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5g-wifi-pioneering-the-new-generation-of-wireless-connectivity/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5g-wifi-pioneering-the-new-generation-of-wireless-connectivity/</guid>
         <pubDate>June 15, 2012</pubDate>
         <description>In the tech industry, five years can be an eternity. The iPhone hit retail shelves in June 2007, five years ago this month. That same year, 802.11n, the fourth generation of Wi-Fi technology, was introduced as a wireless technology that would meet the new consumer demands for medium-resolution video, such as that found on YouTube, a then two-year-old start-up that had just been acquired by Google. Today, smartphone shipments around the globe are up more than 600 percent since those long-ago days of 2007. Tablet PCs such as the iPad, which weren't even on the consumer radar five years ago, have reached mainstream penetration. Nearly 73 million tablets were shipped worldwide in 2011, according to research firm NPD DisplaySearch. Now, things like Internet-connected gaming consoles, set-top boxes and TVs are joining the Wi-Fi ecosystem. [caption id=&quot;attachment_3001&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Click on the image for an interactive graphic that explores the power of 5G WiFi[/caption] Certainly, the architects of 802.11n did not design the technology with this sort of usage in mind. Just as the second generation of Wi-Fi, designed for emailing, was succeeded by a third generation built to support a data-rich Web surfing experience, the evolution of Wi-Fi continues today with the arrival of 802.11ac, or 5G WiFi. Broadcom kick-started the 5G WiFi movement with the announcement of 802.11ac chipsets at the 2012 Consumer Electronics Show, introducing the first steps in building a more robust and reliable wireless pipeline.
While 5G WiFi is designed to meet the needs of today's consumers and their computing lifestyles, the engineers have also looked ahead at the other uses that 5G WiFi is poised to accelerate. Consider that 5G WiFi works on a spectrum that's different from its predecessor and uses beamforming and other innovations to penetrate all forms of building materials, including concrete. It's a shift that will help eliminate</description>
      </item>
      <item>
         <title>Miracast Technology Brings Wireless Streaming to the Living Room</title>
         <link>https://www.broadcom.com/blog/wireless-technology/miracast-technology-brings-wireless-streaming-to-the-living-room/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/miracast-technology-brings-wireless-streaming-to-the-living-room/</guid>
         <pubDate>January 3, 2013</pubDate>
         <description>Moving high-definition content from a small-screen device such as a smartphone or a tablet to a bigger screen, such as a TV or desktop computer, can be a challenge. But thanks to a technology standard that's on the verge of going mainstream, those headaches are about to become a thing of the past. Meet Miracast, a technology that CNET Australia has called a near-perfect wireless streaming solution. At next week's International Consumer Electronics Show in Las Vegas, expect to hear about a growing number of devices that are outfitted with Miracast, which is actually a Wi-Fi standard that relies on technology dubbed wireless display mirroring. The idea is for consumers to stream content between Wi-Fi connected devices seamlessly, without an intermediate box such as a router or gateway. Think of Miracast as a seal of approval for electronics devices, so that problems with compatibility and interoperability become a thing of the past. The standard has been promoted by the Wi-Fi Alliance and Broadcom for some time. In September, the Wi-Fi Alliance handpicked Broadcom's technology for its Miracast test bed. And some big-name CE players have already signaled their support for Miracast, including handset and TV makers Samsung and LG. Embedded companies also have hopped on board, including Intel, Ralink, Marvell, Texas Instruments, Realtek and MediaTek. CES 2013 is likely to be Miracast's true coming-out party with the industry, with hundreds of Miracast-enabled products on the show floor. Miracast is one of the top trends forecast by Broadcom at our December &quot;Geek Peek.&quot; At the show today, Broadcom is announcing partnerships with top tech players and retailers, including Google (debuted in Android 4.2), Roku, NVIDIA, Best Buy and more, to promote Miracast's adoption. Broadcom's contribution to the Miracast ecosystem is in the form of a robust, complete software stack that allows smartphone, display, smart TV and</description>
      </item>
      <item>
         <title>Miracast Makes a Splash with Partners at Mobile World Congress</title>
         <link>https://www.broadcom.com/blog/wireless-technology/miracast-makes-a-splash-with-partners-at-mobile-world-congress/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/miracast-makes-a-splash-with-partners-at-mobile-world-congress/</guid>
         <pubDate>February 25, 2013</pubDate>
         <description>The geek love for Wi-Fi Certified Miracast, an innovative Wi-Fi standard that relies on a technology dubbed wireless display mirroring, seemed to be inevitable when we first started talking about it. The technology, which allows consumers to easily and seamlessly share media from one device to another over Wi-Fi, was designed for today's multimedia lifestyles. The Miracast technology standard is built on the premise that users have a ton of content on their smartphones and laptops (photos, videos and games, to name a few) that they'd like to engage with on a large screen. But for users to seamlessly transfer that content, the technology standard must reside in both the device and the display. That's where Broadcom comes into the game. Broadcom and its partners are looking to spur on Miracast adoption with software and hardware that aims to speed up the integration of Miracast into electronics, getting the technology to consumers faster. At Mobile World Congress today, Broadcom is highlighting recent Miracast partnerships with top tech players and retailers, including Google (in Android 4.2), Intel, NVIDIA, Best Buy Stores and more. Adoption is expected to pick up steam this year as Miracast shows up in PCs, smart TVs and gaming platforms worldwide, including products like the Nexus 4, some of LG Electronics' TVs and Optimus G smartphones, Samsung's Galaxy S III smartphone and others. Some 1.5 billion Miracast devices are expected to ship in 2016, according to ABI Research. Related Video: Wi-Fi Alliance's Miracast Demo from CES 2013 Broadcom's contribution to the Miracast ecosystem is in the form of a robust, complete software stack that allows smartphone, display, smart TV and set-top box makers to roll out the technology in their newest products.
[caption id=&quot;attachment_7541&quot; align=&quot;alignright&quot; width=&quot;240&quot;] Best Buy's Rocketfish Miracast Video Receiver, Broadcom tech inside.[/caption] Broadcom is also offering an off-the-shelf wireless</description>
      </item>
      <item>
         <title>Connect with Broadcom in the Mobile World Capital: Looking at Tech from the Inside-Out</title>
         <link>https://www.broadcom.com/blog/wireless-technology/connect-with-broadcom-in-the-mobile-world-capital-looking-at-tech-from-the-inside-out/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/connect-with-broadcom-in-the-mobile-world-capital-looking-at-tech-from-the-inside-out/</guid>
         <pubDate>February 18, 2013</pubDate>
         <description>It used to be that the sleekest and prettiest gadget would get all of the attention at technology trade shows, but now, as Mom always said, it's what's on the inside that counts. [caption id=&quot;attachment_7396&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Fira de Barcelona Gran Via, new home of Mobile World Congress. Source: Toyo Ito Architects &amp; Associates[/caption] At last month's Consumer Electronics Show in Las Vegas, the C in &quot;CES&quot; might as well have been for Component, instead of Consumer. Given the attention that chip companies like Broadcom received from bloggers, tweeters and the like, it's clear that people are looking anew at what goes into their devices. Chipmakers and other embedded component companies have generally been quiet when it comes to industry gadget-fests, but not so anymore. Tech reporters, trade press and consumers increasingly care more about speeds and feeds and how those specs are changing the user experience, especially as the market continues to be cluttered with hundreds of lookalike smartphones and tablets. The shift in interest is likely to extend to the Mobile World Congress trade show in Barcelona later this month, which is expected to draw 70,000 people from 200 countries. Broadcom and its partners are set to have a big showing this year at MWC, and will join in the lively industry conversations around what's next in mobile connectivity, including: the upcoming transition to LTE networks, Near Field Communication, small cells, wireless media sharing between devices, global positioning technologies and much more. Mobile World Congress is the industry's biggest showcase for new handheld devices. There's a lot of anticipation about what will be on display from many of the biggies, including Samsung, LG, HTC, Intel, Microsoft and others. Barcelona, which was dubbed the mobile world capital in 2011, is living up to its moniker. This year's show is set to feature a unique</description>
      </item>
      <item>
         <title>That's a Wrap! Broadcom Makes Headlines at Mobile World Congress</title>
         <link>https://www.broadcom.com/blog/thats-a-wrap-broadcom-makes-headlines-at-mobile-world-congress</link>
         <guid>https://www.broadcom.com/blog/thats-a-wrap-broadcom-makes-headlines-at-mobile-world-congress</guid>
         <pubDate>February 28, 2013</pubDate>
         <description>Now that the whirlwind energy of Mobile World Congress is starting to wind down, we're reflecting on what we've learned at the show. Broadcom made a slew of big announcements and showed off a lot of buzz-generating demos, including our 4G LTE Advanced modem and the key features that make it attractive to wireless carriers and consumers alike; new advancements in geofencing technology built into our GPS chips; the growing momentum of 5G WiFi with a design win in an award-winning smartphone; a meaningful entrance into the small cells market for mobile and broadband operators; and even some neat PC tech that helps redefine the concept of mobile computing. Check out the sights of Mobile World Congress 2013 on Facebook. It's clear that the mobile space is as competitive as ever. We also saw that embedded technologies, such as LTE processors and NFC chips, are playing an important part in the conversation as consumers continue to get more savvy about their mobile lifestyles. Below are some of the top trends that surfaced above the show floor's thrum, and how Broadcom fits into the stories. Broadcom's LTE Makes Noise Our pre-show LTE announcement made a lot of analysts and carriers take note, as competition has arrived in the LTE processor marketplace. Our built-in carrier aggregation technology was also a huge hit, as it will help carriers avoid a spectrum crunch and deliver the fast streaming speeds mobile consumers are demanding. Here's what they had to say: Trefis Team, Forbes: The 4G LTE modem along with Broadcom's leading Wi-Fi, Bluetooth, GPS and NFC technologies provide manufacturers with a comprehensive product offering needed to build advanced mobile devices. Phil Goldstein, Fierce Wireless: Broadcom announced its intention to enter the LTE modem market with a chip the company boasted as one of the world's smallest, fastest</description>
      </item>
      <item>
         <title>Vijay Nagarajan in The Beacon: Wi-Fi CERTIFIED ac: One Year in, Much to Celebrate</title>
         <link>https://www.broadcom.com/blog/wireless-technology/vijay-nagarajan-in-the-beacon-wi-fi-certified-ac-one-year-in-much-to-celebrate/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/vijay-nagarajan-in-the-beacon-wi-fi-certified-ac-one-year-in-much-to-celebrate/</guid>
         <pubDate>June 19, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in the Wi-Fi Alliance's blog The Beacon, in which Vijay Nagarajan, Director of Product Marketing for Broadcom Corporation's Wireless Connectivity Group, talks about the anniversary of 802.11ac being certified by the Wi-Fi Alliance. From The Beacon: One thing worth celebrating this month (in the high-tech industry, that is) is the one-year anniversary of Wi-Fi CERTIFIED ac, the latest generation of Wi-Fi, based on the 802.11ac standard. Wi-Fi CERTIFIED ac was originally conceptualized as a way to deal with the increasing number of connected electronic devices and the massive amounts of data they demand. For proof of these trends, one need look no further than the latest market statistics. IDC analysts, for example, predict that by the end of 2020, there will be an installed base of 212 billion connected devices, 30.1 billion of which will be connected autonomous things. And, according to the Cisco Visual Networking Index, by 2018 mobile data is expected to reach 18 exabytes per month (1 exabyte = 1 billion gigabytes). Wi-Fi CERTIFIED ac helps solve this connectivity and data explosion conundrum by delivering greater speeds, better coverage and better power efficiency. Just consider a 3x3 Wi-Fi CERTIFIED ac device, which is common to most 802.11ac routers and high-end computing devices; it can serve up to 1.3 Gbps of data. With Wi-Fi CERTIFIED ac, greater speeds result from the expanded use of bandwidth, up to 80 MHz, and increased data transfer efficiency. The use of technologies like beamforming enables increased coverage as well. Over the past year, great progress has indeed been made. The number of Wi-Fi CERTIFIED ac devices on the market has grown, and a number of leading smartphones now include 802.11ac for connectivity. But the momentum doesn't stop</description>
      </item>
      <item>
         <title>Tomorrow's Mobile Network Delivered Today</title>
         <link>https://www.broadcom.com/blog/tomorrows-mobile-network-delivered-today</link>
         <guid>https://www.broadcom.com/blog/tomorrows-mobile-network-delivered-today</guid>
         <pubDate>April 5, 2012</pubDate>
         <description>At Broadcom, we're committed to enabling the mobile lifestyle with content and connectivity when and where people want it. In 2012, the number of connected devices is set to exceed the world's population, with an estimated 7 billion devices. [caption id=&quot;attachment_1673&quot; align=&quot;alignright&quot; width=&quot;280&quot;] Click for Interactive Graphic[/caption] Service providers around the globe are racing to transform their mobile networks to fulfill consumer appetite for more and more bandwidth. Broadcom's end-to-end solutions are delivering the mobile network of the future today, redefining the mobile experience with enhanced connectivity and a more reliable network. Spanning the entire network, from the access point to the edge, to the aggregation and finally the core, Broadcom high-bandwidth solutions support the mobile experience consumers crave, with high-quality voice connections, faster app downloads and uninterrupted video streaming. The Mobile Network Transformation To facilitate this unprecedented increase in bandwidth requirements, service providers are transforming their networks, abandoning legacy Time-Division Multiplexing (TDM) technology in favor of packet-based Ethernet networks as the standard medium for the mobile backhaul of new and future 3G and 4G services. Recognizing the serious limitations of TDM, service providers and their equipment suppliers are turning to Broadcom's industry-leading Ethernet-based solutions to manage the massive new bandwidth requirements of 3G and 4G/LTE networks.
Already a proven technology, Ethernet is ideal for the performance and economic challenges presented by the explosive growth of mobile traffic. It provides a cost-effective connectivity solution that scales to meet rising bandwidth demands, providing up to 1,000 times higher bandwidth than a TDM-based connection at a significantly lower cost. These and other characteristics make it easy to see why Ethernet is expected to become the dominant carrier backhaul technology, approaching 100 percent usage in base stations by 2014, according to the Infonetics Mobile Backhaul Forecast, 4Q10. Better Backhaul Differentiates the Mobile Experience Carriers must backhaul</description>
      </item>
      <item>
         <title>Separating Signal from Noise a Bigger Job with LTE Carrier Aggregation</title>
         <link>https://www.broadcom.com/blog/separating-signal-from-noise-a-bigger-job-with-lte-carrier-aggregation/</link>
         <guid>https://www.broadcom.com/blog/separating-signal-from-noise-a-bigger-job-with-lte-carrier-aggregation/</guid>
         <pubDate>April 4, 2016</pubDate>
         <description>If you could analyze the monthly data usage for your family's mobile device plans, you'd probably notice a trend: it's only going up. That's because the mountain of user-generated content being created with smartphones (think of all the high-res photos and videos uploaded to social sites such as Snapchat and Instagram) is growing ever taller and wider. At the same time, users are increasingly consuming higher-bandwidth content. They're watching 4K video, conversing via videochat services, playing interactive games and streaming music -- all of which spurs mobile operators to build and maintain more robust cellular networks that can accommodate the explosion of data. Among the solutions to boost cellular wireless speeds and bandwidth on Long Term Evolution (LTE) networks is a method called carrier aggregation (CA). Network operators are increasingly moving in this direction because they want their users to have a good mobile experience, said Dennis Moy, FBAR filter product marketing manager at Broadcom. They want to increase the bandwidth of their networks and keep their subscriber base happy by ensuring quick downloads and low latency. LTE rollouts and carrier aggregation are on the upswing, and smartphone makers are getting ready for it. Nearly two-thirds of smartphones shipped in 2020 will incorporate LTE carrier aggregation, according to a January report from ABI Research. ABI forecast that 61 percent of smartphones shipped in 2020 will be LTE carrier aggregation compatible, up from just 23 percent of such devices in 2015. Carrier aggregation is among the flagship features of LTE-Advanced networks, which aim to achieve higher-speed operation by stitching together two or more wireless bands (which may not be adjacent) on the wireless spectrum. Combining two or more bands provides a single, fatter channel whose bandwidth is, in theory, the sum of them all. Except, in practice, it often isn't. That's because good old-fashioned noise, or</description>
      </item>
      <item>
         <title>Fibre Channel:  Why the overachiever of storage is the standard for all-flash arrays – and will remain so</title>
         <link>https://www.broadcom.com/blog/fibre-channel-why-the-overachiever-of-storage-is-the-standard</link>
         <guid>https://www.broadcom.com/blog/fibre-channel-why-the-overachiever-of-storage-is-the-standard</guid>
         <pubDate>May 1, 2017</pubDate>
         <description>Fibre Channel combines performance, extreme reliability and features that other storage protocols have been challenged to emulate. In fact, 80 percent or more of all-flash systems are configured with Fibre Channel to avoid network bottlenecks that could impact all-flash array application performance. Testing was conducted with database and virtualization applications using an all-flash array with Emulex Gen 6 Fibre Channel HBAs and a Brocade Gen 6 Fibre Channel switch. The tests were run a second time in an iSCSI environment using an Ethernet switch and iSCSI adapters. The results revealed substantial performance advantages from using a Gen 6 Fibre Channel environment. The Gen 6 configuration outperformed iSCSI by up to 47 percent running Microsoft SQL Server and Oracle 12c Database applications. The VM storage migration tests revealed that the Gen 6 Fibre Channel network was up to 64 percent faster for Citrix and Microsoft Hyper-V. What's exciting is that performance will be even better when running NVMe-enabled Gen 6 HBAs from Emulex with NVMe all-flash arrays. Latency has been shown to be cut in half compared to SCSI. Deploying NVMe over Fibre Channel is more straightforward than you would think. Because Emulex supports both SCSI and NVMe over Fabrics concurrently, data centers can seamlessly deploy NVMe all-flash arrays alongside existing SCSI arrays with no changes to the network needed. Other options, such as NVMe over RDMA, will require significant changes and upgrades to the network, including new switches and adapters, which will add significant cost and complexity to a deployment. The price of admission for a Fibre Channel network just became a lot more affordable with the new Brocade G610 Switch. The G610 is an affordable Gen 6 switch delivering a low entry cost without compromising features. The switch starts at 8 ports and can be upgraded to 24 ports,</description>
      </item>
      <item>
         <title>Bring it On: Bandwidth Buster 4K Television No Match for Broadcom at CES 2014</title>
         <link>https://www.broadcom.com/blog/bring-it-on-bandwidth-buster-4k-television-no-match-for-broadco</link>
         <guid>https://www.broadcom.com/blog/bring-it-on-bandwidth-buster-4k-television-no-match-for-broadco</guid>
         <pubDate>December 30, 2013</pubDate>
         <description>There's always a bit of anticipation about the next big thing to make its debut at the annual Consumer Electronics Show in Las Vegas. But, year after year, one of the oldest gadgets on the show floor, television, continues to generate headlines. In previous years, the buzz around TV has been tied to breakthroughs such as DVR, Blu-ray, networked set-top boxes and, most recently, 3-D TV. This year, all eyes are on Ultra HD TV, a successor to the high-definition resolution screens that have gone mainstream in recent years. Ultra HD, also known as 4K TV because of the roughly 4,000-pixel resolution it offers, actually made its debut at CES 2013 as a luxury technology with only prerecorded content to play on the screens. But just 12 months later, things have changed radically. Prices of Ultra HD sets have dropped by more than half and, as a result, sales are on the uptick and are expected to grow even more as prices continue to erode in 2014. Meanwhile, true Ultra HD content is now in sight, with Amazon and Netflix planning rollouts in the near future. The Consumer Electronics Association, which hosts the annual trade show, forecasts Ultra HD unit shipments to reach 450,000 in 2014, an eight-fold increase. By 2015, sales charts could show hockey-stick growth. But there's one more piece to the puzzle that cannot be ignored, and that's where Broadcom enters the equation. Delivery of Ultra HD content requires bandwidth, and a lot of it. And while there's an expectation that the infrastructure that powers and delivers Internet connectivity will eventually expand to meet the new demands, Broadcom is offering the low-power, lower-cost compression technology to meet those needs today. [cf-shortcode plugin=&quot;generic&quot; field=&quot;brcm_links_left&quot;]Broadcom taps high-efficiency video codec (HEVC) technology based on the H.265 standard, which improves coding efficiency. Ultimately, it can deliver roughly double</description>
      </item>
      <item>
         <title>Roundup: Broadcom Makes an Impression at CES 2013</title>
         <link>https://www.broadcom.com/blog/roundup-broadcom-makes-an-impression-at-ces-2013</link>
         <guid>https://www.broadcom.com/blog/roundup-broadcom-makes-an-impression-at-ces-2013</guid>
         <pubDate>January 14, 2013</pubDate>
         <description>As we wind down from the excitement and sensory overload of last week's International Consumer Electronics Show in Las Vegas, members of the tech press keep talking about Broadcom's technologies that were showcased at CES. Among the miles of exhibits on the show floor and the sight of perhaps millions of gadgets to gawk at, Broadcom's innovations rose above the din and garnered some buzz from the reporters and bloggers who covered the annual mega-event. Included among the Broadcom standouts were our mobile and set-top box product announcements, particularly our system-on-a-chip technology that's helping operators deploy content for the much-hyped Ultra HD televisions that were all the rage at the show. Ganesh T S from AnandTech called the unveiling of the BCM7445 &quot;undoubtedly the most exciting news to come out of the Broadcom camp for CES 2013.&quot; Below is a sampling from around the web of what the tech media had to say about Broadcom's announcements at the show. Ultra HD TV: When 4K TV, now known by the more buzzy Ultra HD moniker, finally makes its way to living rooms stateside, there's a good chance it'll arrive on the back of Broadcom's tech. Joseph Volpe, Engadget, &quot;Broadcom's new ARM-based chip boosts Ultra HD TV into living rooms of the future.&quot; CNET: &quot;Broadcom chip ushers in H.265 and UltraHD video&quot; by Stephen Shankland ZDNet: &quot;Broadcom debuts 'game changer' chip for HDTVs&quot; by Rachel King Mobile: Broadcom is generally known for its wireless technology prowess, but don't be surprised to see the company's chips put the smarts in your smartphone or tablet. Kevin C. Tofel, GigaOm, &quot;Why Broadcom wants to be the smarts in your next smartphone.&quot; The Verge: &quot;Qualcomm just got some competition as Broadcom enters the mobile processor market&quot; by Dan Seifert AnandTech.com: &quot;Broadcom teases its first</description>
      </item>
      <item>
         <title>CES 2014: Five Reasons Why 5G WiFi is the Foundation for the Connected Home</title>
         <link>https://www.broadcom.com/blog/ces-2014-five-reasons-why-5g-wifi-is-the-foundation-for-the-con</link>
         <guid>https://www.broadcom.com/blog/ces-2014-five-reasons-why-5g-wifi-is-the-foundation-for-the-con</guid>
         <pubDate>January 10, 2014</pubDate>
         <description>LAS VEGAS: It's clear that the connected home, a big, futuristic idea that's a consistent trend-driver at the annual International Consumer Electronics Show, will require a wide range of wireless networking technologies. From where Broadcom sits, deep in the South Hall of the Las Vegas Convention Center, it's 5G WiFi (based on the speedy 802.11ac standard) that will become the centerpiece of that vision. Dino Bekis, vice president of marketing, mobile wireless connectivity, in the Mobile &amp; Wireless Group at Broadcom, chatted with me about it in the company's booth and explained why consumers and service providers really do need the extra heft that comes with this next-gen standard to make the connected home a reality. Here are the top five reasons why 5G WiFi is the foundation for the connected home: Sharing Is Caring: The home is being overtaken by devices, partly because individual consumers now tend to own multiple devices: smartphones, tablets and laptops that all connect to the network. Beyond that, there's a new lineup of devices trying to tap into the WiFi network, such as gaming consoles, as well as wireless set-top boxes like Roku and Dish's Wireless Hopper. That's bound to place some strain on the network. Now, imagine what happens when that same network is asked to stream a video clip between two of the connected devices, from a smartphone to the WiFi-connected TV using a screen-casting technology such as Miracast. This is already happening, and networks are already straining. 5G WiFi is fast and robust enough to handle that sort of network traffic without system hiccups. Mo Content, Mo Problems: It's true that you can't really download faster than the speed at which your Internet service provider allows, even if you have faster WiFi. But it's important to remember, Bekis said, that not every piece of</description>
      </item>
      <item>
         <title>CES Panel: Broadcom's Pomerantz Sees Business Models as Key to Next-Gen Wireless Adoption</title>
         <link>https://www.broadcom.com/blog/ces-panel-broadcoms-pomerantz-sees-business-models-as-key-to-ne</link>
         <guid>https://www.broadcom.com/blog/ces-panel-broadcoms-pomerantz-sees-business-models-as-key-to-ne</guid>
         <pubDate>January 7, 2014</pubDate>
         <description>LAS VEGAS: It's no secret that more data is created daily by the influx of new connected devices in our lives, whether tablet computers, gaming consoles or even our cars. And consumers, who have come to love the benefits of so many connected devices, want to be able to access that data faster. The challenge for the innovators of technology, however, is how to turn these demands into business models. That was one of the topics of a CES panel called Wi-Fi to NFC: What's Next for Wireless Technology, which was made up of several tech executives, including Scott Pomerantz, a Broadcom Senior Vice President and General Manager who oversees the company's efforts around wireless connectivity combination chips. The panel attracted a large crowd, one that was engaged and interested in the future of technologies like 5G WiFi, or 802.11ac. ABI Research estimates that 3.5 billion 5G WiFi chips will be sold in the next five years. Despite the projected sales, the business model remains a key ingredient to widespread adoption. One panelist argued that the paid access model for wireless connectivity has faded, but that consumers might be interested in alternatives that could include advertising or flexible access on daily or weekly schedules. Because real costs are attached to the creation of these products and services, innovators have to find ways to recover their infrastructure investments. One possible model comes via AT&amp;T's announcement at CES of so-called sponsored data, where companies pay service providers' data charges for customers who access videos and apps. The panel compared this approach to Facebook's recent deal with T-Mobile, and even toll-free 800 numbers, calling it a logical step forward but not particularly novel. The business model is the key, Pomerantz explained. This is just one that's been used successfully. I'm sure others will emerge. But the flavor has to be just</description>
      </item>
      <item>
         <title>Broadcom's TDLS Solutions Nab Wi-Fi Alliance Certification</title>
         <link>https://www.broadcom.com/blog/broadcoms-tdls-solutions-nab-wi-fi-alliance-certification</link>
         <guid>https://www.broadcom.com/blog/broadcoms-tdls-solutions-nab-wi-fi-alliance-certification</guid>
         <pubDate>August 23, 2012</pubDate>
         <description>Roughly 17 percent of the world connects with Wi-Fi, and more than 1 billion Wi-Fi devices were shipped last year. It should come as no surprise that demand for innovation in Wi-Fi-enabled devices continues to surge. Factor in the explosion of content transmitted over Wi-Fi networks (including multimedia streaming, file transfer and data back-up) and the need to increase efficiency and capacity becomes crucial. Among the first of the few products selected for the interoperability test bed, Broadcom's TDLS (Tunneled Direct Link Setup) solutions have been given the inaugural seal of approval by the Wi-Fi Alliance. TDLS establishes a direct connection between devices within a traditional Wi-Fi network rather than transmitting via the access point, or AP (see image below). Through a secure connection, TDLS: Enables direct device-to-device transmission. By directly connecting devices instead of going through the AP, networks are more efficient and able to absorb more activity. Enables linked devices to perform at the highest level of shared Wi-Fi capabilities. Users can download or stream multimedia content at the highest speed (like 5G WiFi) between smartphones and consumer electronics, such as a set-top box or digital TV. Allows connected devices to switch to alternate channels. This frees up capacity on the original channel to help users enjoy better performance and less lag time when streaming and downloading multimedia content to their devices. Earning the Wi-Fi Alliance Wi-Fi CERTIFIED TDLS designation means Broadcom's solutions have passed a series of rigorous tests and processes, and have been deemed interoperable with existing Wi-Fi CERTIFIED products like smartphones, laptops, set-top boxes and more. Additionally, all TDLS-linked devices employ WPA2 encryption even if the network is using a lower level of encryption, ensuring the highest level of security. Wi-Fi certification for TDLS is a critical validation for users seeking to optimize network efficiency and performance. The announcement comes on</description>
      </item>
      <item>
         <title>Sophie Wilson, Co-Creator of ARM Processor, Wins Innovation Award</title>
         <link>https://www.broadcom.com/blog/broadcom-innovation/sophie-wilson-co-creator-of-arm-processor-wins-innovation-award/</link>
         <guid>https://www.broadcom.com/blog/broadcom-innovation/sophie-wilson-co-creator-of-arm-processor-wins-innovation-award/</guid>
         <pubDate>November 12, 2013</pubDate>
         <description>Two computer scientists from the U.K., one of whom is a Broadcom Distinguished Engineer, have been named this year's winners in the Computing &amp; Telecommunications category of The Economist's Innovation Awards. Sophie Wilson, Broadcom's Senior Technical Director of Integrated Circuit Design, and Steve Furber, a professor of Computer Engineering at the University of Manchester, are the inventors of the low-power ARM processor design that can be found in more than 90 percent of today's smartphones around the world. Read the Press Release. The two are being recognized at a ceremony next month for their groundbreaking work on the BBC Micro and the design of the ARM processor architecture. The ARM processor core is now used in thousands of different consumer electronics products, including smartphones, tablets, digital cameras and more. (Photo Credit: European Patent Office) The ARM processor emerged from Britain's home-computer boom in the 1980s and went on to change the world, said Tom Standage, digital editor at The Economist and chairman of the panel of 30 judges for the Innovation Awards. Wilson and Furber's role in the mobile revolution deserves to be far more widely known. They are in good company. Also lauded was Colin Angle, founder of iRobot, who was named the winner in the No Boundaries category for his achievements in commercializing robots for consumers. Last year, Wilson and Furber were honored along with other tech titans at the 25th anniversary Computer History Museum Fellow Awards in Silicon Valley. Recognizing Wilson's Contributions: Wilson helped define what's called a reduced instruction set computing (RISC) design for these processors, a spec that was intended to reduce costs. But when the chip was first tested, it appeared to be using no power at all. This simple, efficient design made it ideal for use in mobile devices, and it went on to</description>
      </item>
      <item>
         <title>Trio of Awards Recognize Broadcom's Impact on Mobile Growth in China</title>
         <link>https://www.broadcom.com/blog/trio-of-awards-recognize-broadcoms-impact-on-mobile-growth-in-c</link>
         <guid>https://www.broadcom.com/blog/trio-of-awards-recognize-broadcoms-impact-on-mobile-growth-in-c</guid>
         <pubDate>May 8, 2014</pubDate>
         <description>It's no secret that China is driving the next great groundswell of demand for mobile devices. What's becoming more obvious, and highlighted through recent awards, is the impact that Broadcom's portfolio of connectivity products is having on the rise of mobile devices in that country, as well as in other emerging markets. Consider the state of mobile devices today: Roughly 400 million people in China, more than the entire U.S. population, are now using smartphones, a figure that would have been unbelievable just a decade ago. Yet smartphones represent only a small piece of the mobile device ecosystem now that tablets, wearable gadgets and other connected home electronics are sparking momentum in the Internet of Things trend. At last month's China Information Technology Expo (CITE), a large trade show considered by some to be comparable to the annual Consumer Electronics Show in Las Vegas, Broadcom's BCM4771 GNSS SoC won the Innovation Gold Award for being the world's first single-chip GNSS solution designed for mass-market wearable devices. The BCM4771 brings Broadcom's location technology, which uses five satellite constellations for enhanced accuracy, to a platform that is small and power-efficient enough to work in many types of wearable gadgets. Another Broadcom product, the BCM20736, a Bluetooth Smart SoC with wireless charging support, was named Best Application at April's Wireless Product Awards. The annual event is co-sponsored by Electronic Products China and 21ic.com, two top-tier trade publications for design engineers and the electronics industry. The BCM20736 is part of Broadcom's WICED family of chips, SoCs that provide low-power Bluetooth and Wi-Fi for the next generation of Internet of Things devices. The judges noted that the BCM20736 enables new use cases for wearables and increases market potential by providing OEMs with a flexible solution that can be incorporated into a wider variety of devices. Finally, Broadcom's WICED family</description>
      </item>
      <item>
         <title>Broadcom at Interop: Power Consumption Technology Plays Important Role</title>
         <link>https://www.broadcom.com/blog/wired-and-wireless-operators-accelerate-bandwidth-to-literally-</link>
         <guid>https://www.broadcom.com/blog/wired-and-wireless-operators-accelerate-bandwidth-to-literally-</guid>
         <pubDate>April 18, 2012</pubDate>
         <description>Whether you're reading this post from your PC or, in today's interconnected world, more likely your smartphone or tablet, there's no denying the increase in devices that we all own for business and personal use. In fact, by 2020, the number of connected devices is expected to reach 50 billion; that's six devices for every person on Earth. But it's not just devices that consumers and professionals are using regularly. There's a constant uptick in content consumption that's also driving traffic. To look at it another way: By 2015, one million minutes of video content will cross the network every second, while the number of devices connected to IP networks will be twice the global population. As such, it's not much of a jump to see that as providers upgrade their networks to accelerate bandwidth and literally (and figuratively) keep pace with market trends, new technology solutions will be needed to balance the power consumption and costs of energy-efficient Ethernet. Earlier this week, Broadcom announced the addition of two new 10GBASE-T PHYs (physical layer transceivers) to our portfolio of networking solutions, which extend energy-efficient Ethernet (EEE) to all three operating speeds (1GbE, 10GbE and 100M) while reducing footprint and cost and cutting overall operating power by more than 50 percent. This is the first of many exciting announcements leading up to next month's Interop conference in Las Vegas, where Broadcom will be exhibiting and introducing new technologies for the data center, green IT and the enterprise. The annual technology show, now in its 27th year, is focused on industry trends such as cloud computing, virtualization, security, mobility and data center advances. We'll be blogging, posting videos, live tweeting and more. Follow us on Facebook, Twitter or Google Plus for regular updates and, of course, the Broadcom Connected blog. To follow along with full</description>
      </item>
      <item>
         <title>StrataDNX: The End of The Bandwidth/Extensibility Tradeoff in the Data Center</title>
         <link>https://www.broadcom.com/blog/stratadnx-the-end-of-the-bandwidthextensibility-tradeoff-in-the</link>
         <guid>https://www.broadcom.com/blog/stratadnx-the-end-of-the-bandwidthextensibility-tradeoff-in-the</guid>
         <pubDate>March 18, 2015</pubDate>
         <description>We all know the etiquette: When you're standing outside an elevator or a subway train, you're supposed to step aside and wait for the people inside to get off before you go in. Of course, sometimes people jump the gun, and there's that awkward squashing of bodies (and flaring of tempers) that sometimes results in doors getting jammed and people getting jostled. In other words, the whole thing turns into a big mess instead of getting people to their destination. While this might be an inconvenience of city life, it's analogous to a scenario that happens deep in the vast collection of pipes that make up the core of broadband networks. The same jumble can happen to the packets of data zipping between Ethernet networks (which disembark in your home or office) and optical networks, which help aggregate traffic from a big metropolitan area. If it weren't for a network of technologies that the industry calls the switching fabric, the backbone of the Internet would start to erode. Most people don't get to see inside a data center, but if you opened up the chassis of the equipment they run, you'd see line cards and fabric switches, which store, manage and queue the packets of data so that they go where they need to go. Think of them as the elevator operators who used to make sure people got to the proper floor, or that booming voice on the loudspeakers in the subway station announcing the various stops. Today, Broadcom is launching three new additions to its StrataDNX portfolio of switching fabric products that will make sure high-scale networks can handle both their current and future needs, needs that will make the overcrowding of the average elevator or subway seem like a cushy ride in comparison. These devices will change the way customers</description>
      </item>
      <item>
         <title>Industry Watchers See New Data Center Configurations, Cost Savings Potential for StrataXGS Tomahawk</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/industry-watchers-see-new-data-center-configurations-cost-savings-potential-for-strataxgs-tomahawk/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/industry-watchers-see-new-data-center-configurations-cost-savings-potential-for-strataxgs-tomahawk/</guid>
         <pubDate>October 1, 2014</pubDate>
         <description>Broadcom recently invited market analysts and tech bloggers to a lab in San Jose, Calif., for a first-hand look at its new Ethernet switch family, StrataXGS Tomahawk. In the works for some time, the Tomahawk switch family earned high marks from the handful of bloggers and reporters who follow embedded silicon players in the industry. Among other features, they noted its ability to scale, its cost savings potential, and a feature set that enables a smooth transition to software-defined networking (SDN). Here's a sampling of what they had to say. James Sullivan of Tom's Hardware was impressed by the speed and power of the chip. He wrote: The Tomahawk Series can potentially provide 3.2 Tbps of switching performance, and all this coming from a single chip on a single rack unit. The Tomahawk Switch Series also supports remote direct memory access (RDMA) over converged Ethernet (RoCE), as well as the recent RoCEv2, adding to the high performance potential of the unit. Geared toward the shifting demands of cloud-scale data centers, the Tomahawk switch series provides two tools to give network operators more control and visibility. These were expanded upon by David Chernicoff of ZDNet, who wrote: The Tomahawk series offers a new packet-processing engine called FlexXGS that is designed to handle changing workloads by giving datacenter operators detailed control over user-configurable functions. The BroadView instrumentation set allows operators to drill down to get switch-level analytics with full visibility of the network. Optimized for SDN application ecosystems, the management software provides streaming network congestion detection, packet tracing, link health and utilization monitoring, and application flow and debug statistics. Rivka Gewirtz Little of TechTarget zeroed in on Broadcom's market leadership in SDN-friendly StrataXGS technology for switching: The network switching race is centered on both speed and a new level of programmability that supports automation and dynamic provisioning</description>
      </item>
      <item>
         <title>MWC Demos: RCR Wireless Shows Broadcom's Mobile Tech in Action [VIDEO]</title>
         <link>https://www.broadcom.com/blog/wireless-technology/mwc-demos-rcr-wireless-shows-broadcoms-mobile-tech-in-action-video/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/mwc-demos-rcr-wireless-shows-broadcoms-mobile-tech-in-action-video/</guid>
         <pubDate>March 6, 2013</pubDate>
         <description>Last week, our friends at RCR Wireless stopped by the Broadcom booth at Mobile World Congress to see our latest tech innovations. The result was a series of compelling video interviews with Broadcom engineers that look at the top trends for mobile technology on display at the show, including GPS, 4G LTE networks and more. First up, Broadcom Senior Technical Director of GPS Frank Van Diggelen explains what geofencing tech is all about and why Broadcom's technology will soon be ubiquitous in phones and tablets. Next, we have Punit Awatramani, a test engineer in Broadcom's Mobile and Wireless Group, demonstrating a speed test for the latest Broadcom 28-nanometer processor chipset for 2G, 3G and 4G cellular networks. The carrier aggregation capabilities of Broadcom's 4G LTE advanced modem are on display in the next video. Ajay Wadhawan, associate technical director from Broadcom's Mobile &amp; Wireless Group, shows how carrier aggregation allows two non-contiguous bands of spectrum to run at the same time. Finally, what good is all that speed and spectrum crunch-busting tech if it sucks the life out of the phone's battery? Broadcom's chip not only solves that problem, but also accounts for the giant battery drain that comes from streaming media on a 4G LTE processor. Broadcom's Wadhawan explains how software embedded in the Broadcom chip manages to be 25 percent more power efficient than competitors' offerings. Didn't make it to Barcelona? Get all of the highlights from Mobile World Congress on our dedicated site. Get the latest news from Broadcom by liking us on Facebook, following us on Twitter and reading the blog. Related: That's a Wrap! Broadcom Makes Headlines at Mobile World Congress; Seen at the MWC Broadcom Booth: Preserving Battery Life for 4G LTE; Ahead of Mobile World Congress: Broadcom's Latest GPS Tech Zooms in on Geofencing; Designed for a</description>
      </item>
      <item>
         <title>Broadcom's 5G WiFi Behind Merus Winning Wi-Fi Access Point Test</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcoms-5g-wifi-behind-merus-winning-wi-fi-access-point-test/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcoms-5g-wifi-behind-merus-winning-wi-fi-access-point-test/</guid>
         <pubDate>June 24, 2014</pubDate>
         <description>We've talked up the benefits of 802.11ac Wi-Fi, which Broadcom calls 5G WiFi, for some of the most-anticipated smartphones and tablets to hit the market. Consumers are also starting to see how 11ac is improving their home networks, too, with 5G WiFi-powered routers. But many don't know that this faster, longer-range flavor of wireless connectivity is being rolled out at enterprises, college campuses, hospitals and airports via local hubs called Wi-Fi access points (APs), which ensure that mobile users can get their devices connected to the Internet in dense or otherwise crowded places. Sales of 802.11ac access points are on the rise, according to Infonetics. About as many 802.11ac access points shipped in the first quarter of 2014 as did in all of 2013, Infonetics research shows. Driving the uptick are a few big trends. BYOD, or Bring Your Own Device, in which employees of big companies continue to bring their own tablets, smartphones and even laptops to work, is a big one. BYOD, which makes for what industry insiders call multi-client environments, is where 802.11ac Wi-Fi shines. Employees now have an average of three devices with them at work, and two out of the three will solely access the network via Wi-Fi, said Michael Powell, director of product marketing in the Infrastructure and Networking Group at Broadcom. More and more, we are seeing some enterprises go wireless at the edge of their networks as the mainstream access technology. That's why it's so important to demonstrate that speedy, reliable Wi-Fi connections can make or break the BYOD experience. Broadcom customer Meru Networks made waves last week with a bold claim: that its 802.11ac access points built on Broadcom's BCM43460 5G WiFi chip were the world's fastest. Meru tapped independent lab The Tolly Group to conduct competitive testing for speed and latency, among other</description>
      </item>
      <item>
         <title>An 'Always-On' World: Enabling GPS and Wearable Sensors with Less Power</title>
         <link>https://www.broadcom.com/blog/wireless-technology/an-always-on-world-enabling-gps-and-wearable-sensors-with-less-power/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/an-always-on-world-enabling-gps-and-wearable-sensors-with-less-power/</guid>
         <pubDate>April 4, 2014</pubDate>
         <description>A few weeks ago, we talked about how Broadcom's bringing more accurate GPS technology to wearable devices. But that's only part of the story; sensors play a big role, too. Sensors such as gyroscopes, barometers, accelerometers and others are what enable our connected devices to collect useful data about our bodies and the environment. On our smartphones, sensors do things like recognize gestures, help us get our bearings and send us driving directions. With fitness trackers, health monitors and other wearables, sensors can collect data such as our heart rates, our blood sugar levels or, perhaps, how many steps we took on a given day. Sensors are important, but only insomuch as they can provide on-point data. Early adopters of today's crop of wearables are finding out that they aren't always accurate. GPS + Sensor Hub: Broadcom's tackling the problem by combining a GPS solution with sensor hub technology on one chip. If you put two different activity trackers on your wrist, you'll get two completely different answers with regard to the steps that you've taken or the distance you ran, said Steve Malkos, associate director, program management, in the Mobile &amp; Wireless Group at Broadcom. Today, fitness applications, smart watches and activity trackers are all capturing relative information. The ticket to improving accuracy stems from a Broadcom engineering feat: the tight integration of GPS and the many sensors in the device to create a low-power solution for mobile devices. Broadcom has called this sensors-plus-GPS technology a Location Hub, according to Malkos. You need a good, calibrated sensor solution to remove the errors in GPS, he said. It's a tightly coupled mixture between the two that continues to calibrate each other, and that gives you better results. Smartphones Get Sensor-Smart: So now that we know what more accurate GPS can do for</description>
      </item>
      <item>
         <title>Oh, My Stars! Broadcom Adds Galileo Support and Sensors to Low-Power GNSS Chip</title>
         <link>https://www.broadcom.com/blog/wireless-technology/oh-my-stars-broadcom-adds-galileo-support-and-sensors-to-low-power-gnss-chip/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/oh-my-stars-broadcom-adds-galileo-support-and-sensors-to-low-power-gnss-chip/</guid>
         <pubDate>December 2, 2014</pubDate>
         <description>Galileo gave us the astronomical telescope and Jupiter's moons. So it's fitting that the father of modern science also lends his name to a constellation of satellites that's set to help mobile device-toting consumers get a more accurate picture of their exact location on the Earth. This week, Broadcom announced a new, low-power GNSS chip that supports the EU's Galileo Satellite System, which has six satellites in orbit and is expected to launch more next year. Read the Press Release. This follows the announcement a year ago, when Broadcom unveiled a GNSS chip that helps smartphones deliver up to twice the positioning accuracy by simultaneously supporting multiple constellations, including GPS plus GLONASS (Russia), SBAS (US, Europe, Japan, India), QZSS (Japan) and BeiDou (China). Galileo support helps give mobile devices, and the consumers who use them, a leg up on more accurate positioning, yet it's the other features of the BCM4774 that add value for makers of the next generation of sensor-stocked wearables. Broadcom recognizes that GPS users care about three things: accuracy, time-to-first-fix and battery life. Because the BCM4774 supports multiple constellations, Broadcom's technology can handpick the best satellites with the strongest signal and the most direct line of sight, according to Prasan Pai, senior director of product marketing. The BCM4774 system-on-a-chip, which combines advanced GNSS and GPS with interfaces to a suite of MEMS-based sensors, can measure things such as direction, speed, altitude and other data points. This capability opens up possibilities for new user experiences, especially for always-on applications that run in the background while users go about their day. That trend is starting to play out in what industry watchers foresee as the hottest consumer electronics product category for the upcoming holidays: fitness trackers and smartwatches. There are precious few wearables on the market that manage to get acceptable</description>
      </item>
      <item>
         <title>A Beautiful Mind: TSN Ethernet debuts, bringing determinism to new markets</title>
         <link>https://www.broadcom.com/blog/a-beautiful-mind-tsn-ethernet-debuts-bringing-determinism-to-new-markets</link>
         <guid>https://www.broadcom.com/blog/a-beautiful-mind-tsn-ethernet-debuts-bringing-determinism-to-new-markets</guid>
         <pubDate>April 28, 2017</pubDate>
         <description>The tale of technology is not just one of science. Our left brains mete out high-order calculus and testing protocols, demanding accuracy and an exacting standard of excellence. But our work as engineers and product designers is also wholly creative. Right brain, left brain – it’s all heavy lifting. When we imagine where these new, advancing technologies will take us, we consider not only what comes next in the linear march, but importantly – what will create new opportunities – new markets – and open doors so we can navigate a different world tomorrow. A TSN Ethernet switch chip is one of those door-opening technologies. First: What is a time sensitive network? TSN stands for Time Sensitive Network, meaning a data network where on-time data arrival is paramount for the applications that use it. The key phrase here is on-time arrival. There is no benefit to arriving early, and late arrival could be highly disruptive to the application. A TSN cares only that data arrive exactly at the right time. This beautiful Ethernet mind is constantly synchronizing both the frequency and phase of time to a master clock, with the option to have multiple time domains (different epochs) also synchronized. These are exciting times for Ethernet, as historically we have never asked it to adhere to an exact timing schema. Time sensitive networking can be thought of as having four main advantages over traditional Ethernet models: Time synchronization – The nodes in the network are synchronized to a master time for the entire network and this global reference time is constantly updated in-band using the same packetized Ethernet network used for data. Pre-emption – This guarantees low latency for high-priority packets even with the presence of interfering, low-priority traffic like having to wait for large “jumbo packets” to pass. Pre-emption</description>
      </item>
      <item>
         <title>Wi-Fi Boost: Turbocharged Home Networking with Six-Stream 802.11ac MIMO</title>
         <link>https://www.broadcom.com/blog/wi-fi-boost-turbocharged-home-networking-with-six-stream-802-11</link>
         <guid>https://www.broadcom.com/blog/wi-fi-boost-turbocharged-home-networking-with-six-stream-802-11</guid>
         <pubDate>April 15, 2014</pubDate>
         <description>With the average consumer household sporting roughly six connected devices, home Wi-Fi networks are tasked with carrying an increasingly heavy load. Smartphones, tablets, PCs, set-top boxes, gaming devices and, now, even a new class of wearables that register health and fitness activities are all tapping the same wireless router. And now that the newest flavor of Wi-Fi, 802.11ac (or 5G WiFi), is starting to show up in client devices, there's even more pressure on the workhorse router to put on its best performance. The router has to serve many devices, old and new and with varying capabilities, on the same network. As a result, the overall capacity of the home network declines. Broadcom has a solution for this challenge. The company is unveiling technology that's designed to transform the humble Wi-Fi router into a high-end hub that can better serve every connected device in the home. XStream Specs Called 5G WiFi XStream, the industry's first six-stream 802.11ac Multiple Input Multiple Output (MIMO) platform for home networks was announced by Broadcom today. Its next-gen 5G WiFi specs boast speeds up to 50 percent faster than the most advanced router currently on the market, while also delivering smart software that will enhance in-home wireless experiences. By offering Wi-Fi data rates up to 3.2 Gbps, the 5G WiFi XStream platform is ready for the most demanding wireless network loads. Broadcom's Intelligent Quality of Service (iQoS) feature acts like an Internet traffic cop of sorts, identifying incoming data traffic and allocating bandwidth so that video streaming sites get the highest priority and have the fewest hiccups, while other types of applications, like a file download, get lower priority. 
Meanwhile, Broadcom's SmartConnect software works to separate devices running at the faster 802.11ac from those using older, slower Wi-Fi connections, while still offering double the performance for</description>
      </item>
      <item>
         <title>Going NetXtreme: Dell and HP tap Broadcom for enhanced server technology</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/going-netxtreme-dell-and-hp-tap-broadcom-for-enhanced-server-technology/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/going-netxtreme-dell-and-hp-tap-broadcom-for-enhanced-server-technology/</guid>
         <pubDate>March 7, 2012</pubDate>
         <description>It's been a busy week at Broadcom and the server industry at large, with Broadcom making major announcements around advanced Ethernet technology solutions for both Dell's and HP's new lines of servers. The first announcement was a new portfolio of best-in-class 1GbE and 10GbE Ethernet adapters for Dell's 12th-generation PowerEdge servers. These adapters feature Broadcom's latest NetXtreme controllers, as well as the highest-performance networking silicon, allowing Dell to boost performance and further improve the efficiency and scalability of enterprise servers. (See video below.) The second announcement introduced a new portfolio of adapters and LOMs (LAN on Motherboard) for HP's new ProLiant Gen8 servers. Broadcom's five new adapters for the HP line are available in a variety of dual-port and quad-port network interface cards (NICs), as well as FlexNet cards and mezzanine cards. They are designed for a wide range of network configurations and provide full compatibility with HP's new servers. Broadcom's new technology enables IT managers to maximize performance, efficiency and scalability in the new servers from Dell and HP, delivering industry-leading speed and full line-rate throughput across all ports. Our new 10GbE Ethernet adapters deliver up to 37 percent faster throughput than the nearest competitor. Consider the results of third-party benchmark tests conducted by Demartek that compared Broadcom's adapter to the equivalent adapter offered by Emulex. In addition, these adapters provide full HBA-level offload support of iSCSI and FCoE storage protocols that drastically reduces CPU and memory bandwidth utilization while delivering up to 1.5 million iSCSI I/Os per second (IOPS): 120 percent higher than the nearest competitor! 
The adapters also offer advanced functionality, like switch-independent NIC partitioning (for I/O virtualization) and IEEE 1588 time synchronization. They also implement Energy Efficient Ethernet (EEE) technology that enables ports to use up to 42 percent less power while lowering IT operating</description>
      </item>
      <item>
         <title>VMworld Preview: How the Cloud is Reshaping Data Centers</title>
         <link>https://www.broadcom.com/blog/vmworld-preview-how-the-cloud-is-reshaping-data-centers</link>
         <guid>https://www.broadcom.com/blog/vmworld-preview-how-the-cloud-is-reshaping-data-centers</guid>
         <pubDate>August 8, 2012</pubDate>
         <description>Today's networks require engineers and managers alike to be masters of complexity: They manage the intricacies of traditional silicon, system-imposed performance barriers and server-to-server and server-to-storage communication. In the future, data center managers will be challenged to scale their networks and find efficiencies at the silicon level. With the explosion in demand for public and private clouds (which Cisco predicts will increase at a 50 percent CAGR over the next three years), Web 2.0 and other high-bandwidth data center applications, the usual balancing act is destined to become a tightrope walk. The resulting scalability and efficiency demands on both traditional enterprise and emerging mega-data centers call for a new era of cloud-scale networking. Even organizations with optimized, efficient enterprise networks are taking a fresh look at their data centers as a source of growth, competitive advantage and return on IT investment. Cloud-scale data centers need network virtualization and switch forwarding features that can support a large number of servers and users. Energy-efficient design and cost pressures are further driving down power, cooling, space and form factor metrics. These, in turn, are mandating advanced silicon IP, integration know-how and innovation. Throughout the month, we'll take a closer look at how Broadcom engineers are architecting the transformation of cloud-scale networking with a creative approach to improving performance and agility. Stay tuned to see Broadcom's cloud solutions in action, as we demonstrate our entire line of ground-breaking virtualization technology at VMworld 2012 in San Francisco this month. If you can't stop by our booth (#2017) in person, follow us on Twitter and Facebook for updates from the show floor. 
Related: EE Times: Network switch device equipment balances performance, cost and power in the cloud Electronic Design: Network Infrastructure: Virtualization Requirements Fuel Network Switch Design EE Times: Data center transformation--Sophisticated switching drives fast, fat, and flat White Paper: Avoiding</description>
      </item>
      <item>
         <title>Innovation at Its Best: The Power of IP Sharing</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/innovation-at-its-best-the-power-of-ip-sharing/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/innovation-at-its-best-the-power-of-ip-sharing/</guid>
         <pubDate>June 18, 2012</pubDate>
         <description>The key to true technology innovation is collaboration and the sharing of intellectual property with colleagues and other industry visionaries. That's the message that Broadcom co-founder and CTO Henry Samueli conveyed in a recent article, and one that is actively being promoted by the World Intellectual Property Organization (WIPO), which is seeking input on new ways to drive more collaboration among intellectual property organizations around the world. The need for IP sharing is more important now than ever, as the complexity and capabilities of integrated circuits continue to increase at an astonishing pace, with one-million-transistor chips in the early 1990s evolving to one-billion-transistor chips today. Because today's complex chip designs often leverage open-market IP for reuse of standard functional blocks, time-to-market and development efficiency are improved. At Broadcom, collaboration is alive and well within the organization. In fact, leveraging existing IP from the company's own broad portfolio of engineering expertise is as easy as logging on to a centralized portal. Broadcom has developed a unique and efficient system for IP sharing and unified design flows through its centralized engineering organization and IP library. This unified design approach encourages collaboration across business units, eliminates areas of potential overlap, dramatically reduces engineering costs and time-to-market, and provides the company with a powerfully competitive advantage. 
One example of Broadcom's unique ability to combine the best IP from throughout the organization is the introduction of the new StrataGX and BCM4708x System on a Chip (SoC) devices, the industry's first to combine a high-performance processor, Gigabit Ethernet (GbE) switch, GbE physical layer transceivers (PHYs), USB 3.0 and traffic accelerators all on a single chip. Engineers from two of Broadcom's business groups, the Infrastructure &amp; Networking group and the Mobile and Wireless group,</description>
      </item>
      <item>
         <title>On Deck at Interop 2013: Simplifying With SDN</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/on-deck-at-interop-2013-simplifying-with-sdn/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/on-deck-at-interop-2013-simplifying-with-sdn/</guid>
         <pubDate>May 7, 2013</pubDate>
         <description>Is Software Defined Networking, or SDN, ready for prime time? It's a question that was posed to a panel of industry experts at the Interop conference in Las Vegas today, and one that's not easily answered. [Photo: Broadcom's Ram Velaga (center) sits on a panel about SDN at Interop 2013] But Ram Velaga, Broadcom's vice president and general manager of network switches in the Infrastructure &amp; Networking Group and a panelist at today's session, offered up a simple, and perhaps better, counterpoint to the complex question: Industry adoption of Software Defined Networking is a matter of when, not if. &quot;SDN is here to stay,&quot; he said. The rise of Software Defined Networking is evident at the conference, which kicked off today, as crowds of network administrators and other IT professionals consider how, and to what degree, their organizations should invest in the promise of SDN: simplifying network management. Read More: SDN: A Sea Change in the Data Center But there are still challenges that remain with SDN, largely because companies are still learning how best to deploy SDN and how it relates to other, existing technologies in the network, such as virtualization and application processing. Velaga noted that most of the questions customers have center around agility and performance at the application layers. Companies want to increase the utilization of all of their networking assets, not just the data center, he said. Velaga explained that SDN is just one piece of a larger evolution in the enterprise toward app mobility and agility (see the BYOD trend), not just asset utilization. 
The panel addressed a number of topics, including security concerns, firewalls, network appliances and quality of service. While the industry grapples with the many uncertainties around SDN, its many benefits can't be ignored. It has certainly piqued the interest of network administrators</description>
      </item>
      <item>
         <title>Broadcom, NETGEAR Demonstrate the Power of 5G WiFi</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-netgear-showcase-the-power-of-5g-wifi/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-netgear-showcase-the-power-of-5g-wifi/</guid>
         <pubDate>May 16, 2012</pubDate>
         <description>A group of journalists got their first look at the power of next-generation Wi-Fi, or 802.11ac, during a demo event in San Francisco this week. Based on the early reports, it appears that they agree with us that 5G WiFi is ready for prime time and that the market is ready for 5G WiFi. The event, which was held in a hip art gallery in San Francisco's SOMA district, kicked off with a welcome by Broadcom Senior Vice President Michael Hurlston, who took some time to explain how we got to this point in Wi-Fi's evolution. We live in an age where devices galore - from PCs and smartphones to living room set-top boxes and gaming consoles - are all tapping into Wi-Fi networks, he said. Consumers are uploading videos to YouTube, streaming high-def video from Netflix, sharing photos over Facebook and streaming music from Pandora, among many other things. For those experiences - and new ones we've yet to hear about - to continue to prosper, the wireless pipeline to the Internet needs to be faster, stronger and more reliable. That's where 5G WiFi comes in. Also see: 5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle Next-Gen Devices and Demands See photos from the event on Broadcom's Facebook Page The event featured comments by NETGEAR CEO Patrick Lo and product demonstrations by David Henry, NETGEAR's Vice President of Product Management, who offered some real-time demonstrations of files being transferred over the existing Wi-Fi - 802.11n - and the new 5G WiFi. When it comes to speed, there was no contest: 5G WiFi transferred large files in a fraction of the time it took for the transfers over 802.11n. The opening lines of a blog post on laptopmag.com pretty much summed it up: &quot;Unless you want to be wedded to 802.11n's</description>
      </item>
      <item>
         <title>Why Unlicensed Spectrum Allocation is Critical to the Next Wave of Innovation</title>
         <link>https://www.broadcom.com/blog/wireless-technology/why-unlicensed-spectrum-allocation-is-critical-to-the-next-wave-of-innovation/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/why-unlicensed-spectrum-allocation-is-critical-to-the-next-wave-of-innovation/</guid>
         <pubDate>July 15, 2014</pubDate>
         <description>Over the years, consumers have been quick to adopt the technologies made possible by wireless spectrum, the invisible airwave frequencies that deliver everything from radio and TV broadcasts to Wi-Fi Internet connections. Yet, as more people become armed with more wireless devices, the congestion across these frequencies increases and the performance of the signals begins to erode. Ever try to upload a photo to a social media site from a crowded event? Chances are that it took several tries for your upload to push its way through the data-crunched network. Or perhaps you find yourself competing with your kids for Internet connectivity at home. One child is streaming Netflix, the other is trying to conduct Internet research with a tablet, and you are simply trying to catch up on the news, waiting for those headlines to load. While those scenarios are frustrating, the emerging Internet of Things (IoT), an onslaught of new Internet-connected devices such as home appliances and fitness monitors, is expected to clog the airwaves even more. With industry watchers heralding some 50 billion connected devices by 2020, the need for additional licensed and unlicensed spectrum for consumer use is clear. Benefits of Licensed and Unlicensed Spectrum In the U.S., the Federal Communications Commission (FCC) makes decisions about how to allocate the two flavors of wireless spectrum: licensed and unlicensed. Licensed spectrum, used for services like TV broadcasting, commercial radio and cellular voice and data, is auctioned off by the FCC. The auctions give companies and organizations exclusive use of a particular frequency band of spectrum over a set period of time. By allotting certain frequencies for voice and data transmission, cellular carriers have been able to guarantee a quality of service and expand their customer base, as well as their product offerings. 
Unlicensed spectrum, on the other hand,</description>
      </item>
      <item>
         <title>Putting Wearables into Context with Low-Power GNSS</title>
         <link>https://www.broadcom.com/blog/wireless-technology/putting-wearables-into-context-with-low-power-gnss/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/putting-wearables-into-context-with-low-power-gnss/</guid>
         <pubDate>September 14, 2015</pubDate>
         <description>When it comes to small wearable devices (think fitness bands, smartwatches, kid-trackers or heart rate monitors), location accuracy and battery life really matter. To help improve both, Broadcom is developing technology that links user behavior to device performance. When a device understands what the user is doing (taking a leisurely stroll, jogging a 10K or sitting in a moving car), it can more efficiently monitor and record location and distance data without placing an extra strain on the battery. That's the innovation in Broadcom's BCM47748, a new Global Navigation Satellite System (GNSS) chip for Internet of Things (IoT) and wearable devices that's being unveiled this week at the Institute of Navigation (ION) GNSS conference. Geared toward a variety of IoT devices (such as drones) and small wearables, the BCM47748 simplifies the job of getting GPS features into devices that aren't as large or as complex as smartphones, said Prasan Pai, senior director of wireless connectivity at Broadcom. The shortage of circuit board real estate is a big challenge for designers of these products, who have sometimes had to choose between form and function. In general, the smaller the device, the slimmer and more lightweight its battery must be. As a result, the more often it will need charging. And when power-intensive tasks, such as distance and position tracking, are added into the equation, the end result is usually poor performance. The BCM47748 is ideal for products that are too small for anything but very simple processors, Pai said. On the accuracy side, the improvement is the result of a Broadcom engineering feat: tightly integrating GPS with an array of sensors to create a low-power solution where satellites are only monitored when needed. The device worn by someone sitting at a desk won't monitor the satellites more than once, but the device worn by someone out</description>
      </item>
      <item>
         <title>MWC: Femtocells rescue cell phones from dropped calls, slow downloads</title>
         <link>https://www.broadcom.com/blog/mwc-femtocells-rescue-cell-phones-from-dropped-calls-slow-downloads</link>
         <guid>https://www.broadcom.com/blog/mwc-femtocells-rescue-cell-phones-from-dropped-calls-slow-downloads</guid>
         <pubDate>February 27, 2012</pubDate>
         <description>As phones have grown faster and more powerful, our patience with dropped calls and slow downloads has worn thin. Fortunately, femtocells - small cells that increase cellular coverage in homes or office buildings - can solve some of those problems.

At Mobile World Congress in Barcelona this week, NETGEAR announced the world's first fully integrated quad-play small cell home gateway for the Alcatel-Lucent Global Network, powered by Broadcom's DSL, Wi-Fi and small cell platforms. This means better network coverage, longer battery life and faster data services on smartphones.

This happens because service providers can now expand 3G HSPA+ mobile coverage beyond the reach of their existing footprint to fixed-line customers at home. They can assure Quality of Service to mobile devices and seamlessly move traffic from macrocells to small cells - making better use of available spectrum and helping service providers offer new services.

The new NETGEAR DEVG2000F Small Cell Gateway, which is now shipping, provides reliable, affordable and consistent 5-bar 3G coverage, as well as data and voice services from one device.

Related coverage:

	Broadband at Mobile World Congress
	MWC: Broadcom targets mobile network demands with enhanced backhaul technology

 

 </description>
      </item>
      <item>
         <title>New Trident 3 switch delivers smarter programmability for enterprise and service provider datacenters</title>
         <link>https://www.broadcom.com/blog/new-trident-3-switch-delivers-smarter-programmability-for-enterprise-and-service-provider-datacenters</link>
         <guid>https://www.broadcom.com/blog/new-trident-3-switch-delivers-smarter-programmability-for-enterprise-and-service-provider-datacenters</guid>
         <pubDate>June 14, 2017</pubDate>
         <description>Protecting customers’ switch investments with smart, data plane programmability, Broadcom introduces the Trident 3, its new 3.2Tbps, fully programmable switch with 25G SerDes – now available to data center, enterprise, and service provider networks transitioning to high-density 10/25/100G Ethernet. The Trident 3 builds on the success of the market-lauded and well-established Trident and Tomahawk® product lines, both in the StrataXGS® family of products. This high-performance switch chip includes data plane programmability, which is of critical importance for customers seeking to future-proof their switch architecture decisions while also demanding rapid deployment times. The Trident 3 family of switches offers 200Gbps to 3.2Tbps of scalable throughput with reconfigurable on-chip databases, best-in-class load balancing, and rich embedded instrumentation for network visibility. Smarter programmability The Trident 3 is fully programmable, maintaining 100 percent backward compatibility with the existing installed base of networks based on StrataXGS Trident and Trident 2. Looking forward, new switching and instrumentation features can be seamlessly integrated via fully verified, in-field upgrades, just as easily as you might update software – all with no changes to the underlying hardware platform. This unique backward/forward duality of Trident 3 means designers can evaluate the StrataXGS family in terms of both scale and cost efficiency alongside Broadcom’s usual performance advantages – all at the fastest time-to-deployment for customers. 
Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group, says “With Trident 3, our customers can leverage a single development to yield a complete line of programmable switching platforms with the same rich feature set extending all the way from the service provider edge, to the data center, converged campus core, and wiring closet.” Significantly, Trident 3 offers highly parallelized packet processing engines with multiple lookups per clock and centralized, fungible, shared databases. The efficient implementation of programmable engines</description>
      </item>
      <item>
         <title>In-Car Infotainment: Not Just a Head Unit Anymore</title>
         <link>https://www.broadcom.com/blog/in-car-infotainment-not-just-a-head-unit-any-more</link>
         <guid>https://www.broadcom.com/blog/in-car-infotainment-not-just-a-head-unit-any-more</guid>
         <pubDate>January 10, 2014</pubDate>
         <description>LAS VEGAS – Getting safely from point A to point B in the connected car is more complex than ever, thanks to a new crop of sophisticated technology that's set to give drivers and passengers alike a more personalized and entertaining ride. For one, communications technology in the car is expanding the industry's definition of &quot;infotainment.&quot; [Photo: Eric Riyahi, executive vice president of global operations at Parrot.] Showing off the proof of concept on a packed show floor at the Consumer Electronics Show this week, Parrot SA Vice President of Global Operations Eric Riyahi explained that infotainment in the car is not just a head unit anymore. Modern cars must support a lot of communication between different devices: smartphones, tablets, sensors, cameras, infotainment systems, on-board diagnostics, and automated driver assistance systems. Parrot's goal was to tie that all together and create a network in the car comparable to what you have at home. Related: Parrot and Broadcom Showcase AVB Technology for Automotive Infotainment But building an in-car network also presents significant challenges that home networks don't have to deal with, including weight, reliability, and resistance to delays and glitches. That's the driving force behind Paris-based Parrot's new Ethernet-based Audio Video Bridging (AVB) solution using Broadcom's BroadR-Reach automotive Ethernet technology. AVB's deterministic network management lets carmakers define precise timing for streams between specific network nodes and enforce the bandwidth between them no matter what competing traffic happens to be on the network. 
A Matter of Priorities In a car, lots of devices create data, and the system has to prioritize the traffic so the driver always gets what she needs. The central, essential data stream to the driver is always protected, he said, resulting in zero packet loss and fast delivery of the video stream. Prioritization is essential</description>
      </item>
      <item>
         <title>President Obama to Accelerate Broadband Deployment with Executive Order</title>
         <link>https://www.broadcom.com/blog/president-obama-to-accelerate-broadband-deployment-with-executi</link>
         <guid>https://www.broadcom.com/blog/president-obama-to-accelerate-broadband-deployment-with-executi</guid>
         <pubDate>June 13, 2012</pubDate>
         <description>The White House is about to give Americans a big broadband boost. President Obama will sign an Executive Order this week that will accelerate the deployment of broadband networks nationwide and create a new public-private partnership, called US Ignite, to develop applications that will run on those new networks. The Executive Order, which was announced by the White House today, makes broadband construction along Federal roadways and properties up to 90 percent cheaper and more efficient, which will make it easier for broadband carriers to deploy networks on Federal properties and roads and speed the delivery of connectivity to communities, businesses and schools. US Ignite, which is comprised of nearly 100 cities, corporations and non-profit entities, focuses on developing applications for advanced manufacturing, medical monitoring, emergency preparedness and other services that will run on the broadband networks. So far, more than $60 million has been committed to the cause, along with various other partners offering subsidies. Both announcements validate the very reason Broadcom exists: to connect people to the content that matters most to them. Unfortunately, it is still challenging for many people in the U.S. to connect to broadband networks at home, work, or on the go, especially as bandwidth-hungry applications and multimedia continue increasing in use. While the U.S. population has reached 313 million people, it's staggering that only 245 million of those people use the Internet. YouTube's streaming of the 2012 Summer Olympics illustrates the need for ubiquitous connectivity and high-speed services to match the growing broadband needs of content providers and the explosive growth of high-speed content consumption by consumers. 
The majority of the world's infrastructure hardware runs on Broadcom technology. In fact, 99.98% of all Internet traffic crosses a Broadcom chip at one point or another. That's impressive. But what's most important are the people who aren't benefiting from Broadcom's technology to learn, share, connect</description>
      </item>
      <item>
         <title>Broadcom HR Exec Terri Timberman Named Among &quot;Most Influential Women of California&quot;</title>
         <link>https://www.broadcom.com/blog/broadcom-hr-exec-terri-timberman-named-among-most-influential-w</link>
         <guid>https://www.broadcom.com/blog/broadcom-hr-exec-terri-timberman-named-among-most-influential-w</guid>
         <pubDate>February 1, 2012</pubDate>
         <description>Top leaders from the likes of eBay Inc., Gap Inc., TiVo Inc. and Google Inc. will gather February 2-3 at the University of California, Berkeley, for the Annual California Diversity &amp; Leadership Conference.

Alongside women executives from other industry-leading tech companies, including Hewlett-Packard Co., McAfee Inc., Ingram Micro Inc. and Sony Electronics Inc., Broadcom Executive Vice President of Human Resources Terri Timberman is being named one of the &quot;Most Powerful and Influential Women of California&quot; by the California Diversity Council.

&quot;Open dialogue is essential for progress,&quot; said Dennis Kennedy, founder and CEO of the National Diversity Council. &quot;If companies and organizations do not learn how to successfully leverage and manage diversity, it will be very challenging for them to stay competitive.&quot;

Timberman attributes her success to hiring great staff who share her values and are willing to take risks.
&quot;From hunting and hiring the best in the industry, to designing training and development programs, the goal is to create an environment and systems that enable employees to do their best and achieve their potential, which ultimately drives success,&quot; Timberman said.
The California Diversity Council is a non-profit organization that champions diversity and inclusion throughout California's academia, businesses and communities.

The conference theme, &quot;Deconstructing the California Glass Ceiling,&quot; focuses on building a solid network for career advancement. Awardees for the &quot;Most Powerful and Influential Women of California,&quot; &quot;Multicultural Leadership&quot; and &quot;DiversityFIRST&quot; honors will be recognized during the conference.

Read more about Terri Timberman in her bio.

Read the news release (PDF).</description>
      </item>
      <item>
         <title>Model B+: Raspberry Pi Ups its Game with New Improvements</title>
         <link>https://www.broadcom.com/blog/broadcom-innovation/model-b-raspberry-pi-ups-its-game-with-new-improvements/</link>
         <guid>https://www.broadcom.com/blog/broadcom-innovation/model-b-raspberry-pi-ups-its-game-with-new-improvements/</guid>
         <pubDate>July 17, 2014</pubDate>
         <description>The Raspberry Pi, an affordable, bare-bones computer that's about the size of a credit card, hasn't changed much since its release more than two years ago. This week, the beloved microcomputer got a mini-facelift, and the result is the new and improved Raspberry Pi Model B+. Broadcom technical director Eben Upton, who is also the co-founder and public face of the U.K.-based Raspberry Pi Foundation, declared that this isn't a Raspberry Pi 2, but rather the final evolution of the original Raspberry Pi. All of the components of the original Raspberry Pi are still there: Broadcom's BCM2835 processor, 512 MB of RAM, the same powerful software and the same $35 price tag. The Raspberry Pi Model B+ is outfitted with more ports, uses less power and includes a slew of other tweaks that make it even more dynamic and user-friendly. Consider these features, as noted in the official Raspberry Pi blog: More GPIO: The general-purpose input/output header has grown to 40 pins (14 more than the previous model). More USB: There are now four USB 2.0 ports, compared with two on the earlier model. Micro SD: The old friction-fit SD card socket has been replaced with a much nicer push-push micro SD version. Lower power consumption: By replacing linear regulators with switching ones, power consumption has been reduced by between 0.5 and 1 watt. Better audio: The audio circuit incorporates a dedicated low-noise power supply. Neater form factor: The USB connectors have been aligned with the board edge, composite video has been moved onto the 3.5mm jack, and four squarely-placed mounting holes have been added. One of the biggest enhancements is its lower power consumption. By using switching regulators instead of linear ones, the Model B+ offers more efficient power management. This means users can manage more devices while using the</description>
      </item>
      <item>
         <title>Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-energy-efficient-ethernet-is-good-for-the-planet/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-at-interop-energy-efficient-ethernet-is-good-for-the-planet/</guid>
         <pubDate>April 25, 2012</pubDate>
         <description>For many organizations, concerns about rising energy costs and environmental impact remain top-of-mind. At Broadcom, engineers have been thinking about how energy efficient networking can dramatically reduce an organization's carbon footprint while also making a significant impact on the bottom line. Estimates show that more than 150 million metric tons of carbon dioxide (CO2) are emitted to power IT equipment, with a global price tag of about $16 billion per year. Business-as-usual projections foresee a 130 percent rise in CO2 emissions by 2050. So what is Broadcom doing to make a difference? This week, in the lead-up to next month's Interop conference in Las Vegas, Broadcom has announced the addition of four new green chips for enterprise, small- and medium-sized businesses and service providers to our growing portfolio of Energy Efficient Ethernet offerings. Broadcom's extensive EEE portfolio, the broadest in the industry, goes beyond industry standards to significantly reduce power consumption of network devices. Broadcom's latest 10/100/1000BASE-T physical layer transceivers (PHYs) are built with the objective of lowering operating power by more than 40 percent when the network is active and up to 70 percent during periods of reduced link utilization. In the U.S. alone, this could translate into a reduction of CO2 emissions by up to 2.85 million metric tons. How much of an impact does that sort of reduction drive? Consider these other ways to quantify 2.85 million metric tons: Annual greenhouse gas emissions from 495,000 passenger vehicles CO2 emissions from 291 million gallons of gasoline The electricity use of 314,000 homes for one year Carbon sequestered by 66 million tree seedlings grown for ten years Carbon sequestered annually by 551,000 acres of pine or fir forests To learn more about Broadcom's Energy Efficient Ethernet offerings, visit our Interop event page or stop by our booth at the show.</description>
      </item>
      <item>
         <title>Cloud Scale Networking: Transforming the Data Center Landscape</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/cloud-scale-networking-transforming-the-data-center-landscape/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/cloud-scale-networking-transforming-the-data-center-landscape/</guid>
         <pubDate>April 1, 2014</pubDate>
         <description>The network makes or breaks the cloud. That's the sentiment echoed by Nick Ilyadis, vice president and chief technology officer in the Infrastructure &amp; Networking Group at Broadcom, and it underscores some very real challenges facing modern CIOs, network administrators and IT departments alike. It's also a driver behind Broadcom's recent Interop announcement, a powerful new XLP532 8-core communications processor promising to simplify deployment of Network Functions Virtualization (NFV) and Software Defined Networking (SDN) for mid-size enterprises. The cloud only works if the network is scaled out to enable applications and data to flow between the cloud's servers and users, Ilyadis said. If you have a server sitting on a very low bandwidth connection, for example, the cloud won't work. So, the network is really the key enabler. Those critical networks, and the data centers that house them, are facing some challenges. Workloads are changing in unpredictable ways as trends such as BYOD and location-aware mobile apps ramp up. The demands put on the cloud are constantly evolving and the scale is ever-growing, making it difficult for service providers and application developers to know what they may need to run efficient networks. At the same time, the volume of data carried by networks has exploded. Cisco estimated last year that by 2017, data centers will handle some 7.7 zettabytes of IP traffic, two-thirds of which would be attributable to cloud computing. Networks Under Pressure Networks can choke under this pressure, bringing down service level agreements and lowering asset utilization. And they are prone to inefficient scaling, meaning that if the network isn't built efficiently it can cost more to build, grow and operate. But it doesn't have to be that way, which is where much of Broadcom's work comes in. 
Networks can be designed differently. Legacy, tiered network designs can be</description>
      </item>
      <item>
         <title>BroadView: The Open Source Analytics Advantage Coming to a Network Near You</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadview-the-open-source-analytics-advantage-coming-to-a-network-near-you/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadview-the-open-source-analytics-advantage-coming-to-a-network-near-you/</guid>
         <pubDate>November 9, 2015</pubDate>
         <description>There are two words that strike fear into the hearts of data center operators and other professionals responsible for keeping the cloud up and running: Black holes. Consumers are already familiar with these deep-space mysteries, which generate such enormous gravitational pull that not even light particles can escape them. Back down in Earthly data centers, black holes also describe what happens when a packet of data gets dropped as traffic moves through a network. There are a number of reasons why data gets lost. It could have something to do with a dead IP address, congestion, or some other outage. Yet in many cases, those running a data center may have no way of knowing why some traffic disappears into a black hole. To help give a deeper look into such networks, Broadcom's BroadView instrumentation software suite is getting an upgrade today. BroadView aims to empower network administrators to monitor networks in a proactive manner, trace packets better and make data center analytic tools work for them. Think of it as a flashlight that not only shines into the black hole, but also makes sure nothing else can disappear inside one. BroadView Instrumentation On a technical level, BroadView is a collection of open-source software, plugins to multiple ecosystem projects (such as OpenDaylight and OpenStack), and documentation. It offers programmable access to the internal workings of switching architecture for enhanced network control tasks such as monitoring, congestion control and advanced troubleshooting. Lost or dropped packets may sound like a minor nuisance, but they can mean delays in performance across a network, which has an impact on everyday services that data centers provide. Transparency into the network isn't just a nice-to-have. It's increasingly becoming a cornerstone for public cloud storage services. 
Companies that outsource their data centers with Amazon Web Services, VMware's vCloud Air, Microsoft's Azure and others are driving</description>
      </item>
      <item>
         <title>Broadcom at Computex: 5G WiFi and Gigabit Throughput [Video]</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-at-computex-5g-wifi-and-gigabit-throughput-video/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-at-computex-5g-wifi-and-gigabit-throughput-video/</guid>
         <pubDate>June 11, 2012</pubDate>
         <description>At the Computex conference in Taipei last week, the folks from FirstZoom.tv stopped by the booth for an up-close look at some of Broadcom's technologies.

The first stop was by the 5G WiFi Demo area, where our own Dino Bekis showcased the power of 5G WiFi in action.



Next up was Ed Doe, who showcased a home network environment and how the experience can be enhanced by a network accelerator that pushes full gigabit-speed throughput.



Related posts:

	Broadcom at Computex: Unleashing the Power of 5G WiFi
	5G WiFi: Introducing a Wi-Fi Powerful Enough to Handle Next-Gen Devices and Demands

Industry Buzz:

	eWEEK: Broadcom Unveils Integrated 5G WiFi SoCs for SMBs, Home Offices
	EE Daily News: Broadcom adds 802.11ac WiFi SoCs for SMB and home routers
	The Tech Report: New Broadcom SoCs promise cheaper 802.11ac routers
</description>
      </item>
      <item>
         <title>5G WiFi Momentum: Smartphones Spark 802.11ac Adoption</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5g-wifi-momentum-smartphones-spark-802-11ac-adoption/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5g-wifi-momentum-smartphones-spark-802-11ac-adoption/</guid>
         <pubDate>July 12, 2013</pubDate>
         <description>Our smartphones are our always-on companions: they are the key to our schedules, the lens to our world and the connection to our friends, family and colleagues. Increasingly, they're also becoming drivers of new, innovative connectivity technologies. Consider how the next generation of Wi-Fi, an engineering standard called 802.11ac, or 5G WiFi, is gaining ground via the smartphone and consumer demand for faster downloads, more robust connectivity and improved power efficiency. The first routers and laptops to offer 5G WiFi hit the scene more than a year ago, opening the access doors to this next-generation standard. But it's the arrival of mobile devices such as Samsung's Galaxy S4 and the HTC One that are seen as drivers of an enhanced mobile experience powered by 5G WiFi. So what, exactly, differentiates 5G WiFi from its predecessors and makes it worthy of so much attention? Rahul Patel, Broadcom Vice President, Wireless Connectivity Combos in the Mobile &amp; Wireless Group, explained it best: The 5G WiFi predecessor, called the 802.11n standard, operates primarily in the 2.4GHz frequency band, which often sees interference from Bluetooth headsets, microwave ovens, baby monitors and other Wi-Fi-enabled devices. 5G WiFi, by contrast, operates on the 5GHz band, a less congested spectrum that provides faster data rates and broader coverage with fewer dead spots. 5G WiFi uses an 80 MHz channel bandwidth that is two times wider than the channel on similar 802.11n products, a game-changer for bandwidth-hungry tasks such as downloading HD movies. Recently, the Wi-Fi Alliance, an industry group that's spurring along the adoption curve, selected Broadcom's 5G WiFi for its certification program, meaning Broadcom's chip is now certified and will be used as a benchmark to validate interoperability among 802.11ac products. And Broadcom is helping to accelerate adoption by pumping out two new 5G Wi-Fi combo chips with</description>
      </item>
      <item>
         <title>Samsung Galaxy S5 Hits Stores, Chock Full of Broadcom Tech</title>
         <link>https://www.broadcom.com/blog/wireless-technology/samsung-galaxy-s5-hits-stores-chock-full-of-broadcom-tech/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/samsung-galaxy-s5-hits-stores-chock-full-of-broadcom-tech/</guid>
         <pubDate>April 11, 2014</pubDate>
         <description>One of the most-anticipated smartphones of the season hits store shelves today. And thanks to the techies who take apart products to see what's inside, consumers already know that the Samsung Galaxy S5 is much more than a pretty screen. First teased at Mobile World Congress, the Android-based Galaxy S5 has been generating quite a bit of buzz in recent weeks as device enthusiasts have been eager to know what's inside this newest smartphone. Now, it's been revealed that the Galaxy S5 is chock full of some of Broadcom's most innovative technologies. 2x2 MIMO for Next-Gen Wi-Fi Specifically, enthusiasts are noting the inclusion of Broadcom's BCM4354, a combo chip that brings the 2x2 MIMO antenna technology that's responsible for the increased speed, range and throughput of the 5G WiFi (802.11ac) connectivity in the phone. Understanding 2x2 MIMO: Download and share the infographic The BCM4354 enables smartphones like the S5 to find and keep a clearer and more powerful Wi-Fi signal, essentially doubling wireless performance and widening coverage areas, while increasing system power efficiency. For consumers, that means their mobile Wi-Fi experience gets markedly better, enabling them to do things like send photos from crowded events (like a concert), eliminate dead spots where the router's signal can't reach in their homes, and get faster downloads and Web surfing while on the go. Engineering MIMO antenna technology can be challenging in a small form factor phone, said David Recker, senior director of product marketing in the Mobile &amp; Wireless Group at Broadcom. However, the S5 has proven that this is possible with Broadcom's 5G WiFi MIMO technology. Recker added that Samsung gave the consumer Wi-Fi experience top billing for the new phone. It's the first phone in the world to launch with 2x2 MIMO, he said. It's designed to give the consumer</description>
      </item>
      <item>
         <title>Runners' Peeve: Lagging GPS?</title>
         <link>https://www.broadcom.com/blog/wireless-technology/runners-peeve-lagging-gps/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/runners-peeve-lagging-gps/</guid>
         <pubDate>April 4, 2014</pubDate>
         <description>Ever see a gaggle of marathon runners at the starting line holding their smartwatches skyward? They're looking for that first GPS satellite ping, the one that enables their gadgets to get a fix on their locations. For outdoor runners, GPS gets hung up on two problems. For one, satellite-based GPS is often slow to pick up the first fix to begin tracking a route. Second, once it has a fix on location, GPS can experience interference as signals get blocked by some buildings and bounce off of others. The latter problem is especially vexing for runners using smartwatches or other wearables to monitor their pace accurately, track their workouts or qualify for a race, according to Frank van Diggelen, vice president of technology, GPS, in the Mobile &amp; Wireless Group at Broadcom. Broadcom's technology, which combines accelerometers with GPS, aims to mitigate these issues to significantly improve the accuracy of measured speed (or pace), especially in urban environments with lots of tall buildings, van Diggelen said. Broadcom also uses what's called Long-Term Orbit technology, which enables the device to sync up with satellites faster. Our GPS knows where the satellites are the instant it starts up, and this not only helps accuracy but solves the long time-to-fix problem, he said. No more holding your watch up to the sky waiting for a satellite fix. There's also the high power consumption of native GPS technologies, and the apps that utilize them. Devices such as smartwatches need to run on small batteries for a long time without being charged. Broadcom is leveraging its 15 years of experience with global positioning technologies to help lower the power requirements without compromising accuracy. This enables manufacturers of smartwatches, wristbands and other wearable gadgets to get more functionality with a smaller power footprint. That means</description>
      </item>
      <item>
         <title>Greg Fischer in ECN Magazine: &quot;The Answer to the Internet of Things Spectrum Crunch&quot;</title>
         <link>https://www.broadcom.com/blog/greg-fischer-in-ecn-magazine---the-answer-to-the-internet-of-things-spectrum-crunch-</link>
         <guid>https://www.broadcom.com/blog/greg-fischer-in-ecn-magazine---the-answer-to-the-internet-of-things-spectrum-crunch-</guid>
         <pubDate>June 13, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in ECN Magazine, in which Greg Fischer, Senior Vice President and General Manager, Broadband Carrier Access at Broadcom, talks about the role small cells will play in how wireless networks enable the &quot;Internet of Things&quot;. From ECN Magazine: Recently, the technology industry has been abuzz with talk about the Internet of Things (IoT), a term used to describe an ecosystem of consumer electronics and appliances that have the ability to connect to the network and communicate with one another. Many pundits and even technology-makers themselves are preparing for major changes to information and communications infrastructure as literally billions of IoT devices become connected. In fact, IDC analysts predict that by the end of 2020, there will be an installed base of 212 billion things, 30.1 billion of which will be connected. While those forecasts are impressive, the optimism for market growth stemming from the positive impact of IoT devices is tempered with concern over how the massive amounts of data collected by these devices will be managed across the network. Moving forward, that concern will only intensify, and with good reason. While today more than half of all mobile traffic is handled by the macro network (Infonetics, January 2013), over time that traffic will be more distributed over every type of small cell, representing a significant market opportunity. The IoT era will bring the realization of scenarios like vehicles talking to embedded monitors positioned along roadways to better route traffic, and home appliances connecting to the smart grid to improve their efficiency and reliability. But that's only part of the story. Those connected devices will further burden our already strained networks, augmenting</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Broadcom’s StrataDNX™ chipset line</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-stratadnx-chip</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-stratadnx-chip</guid>
         <pubDate>February 23, 2017</pubDate>
         <description>What writers and editors from the industry press are saying about Broadcom’s StrataDNX switching portfolio From Converge Digest: “Aiming for new high-volume markets and appliances at the edge of the carrier network, Broadcom announced two new product families in its StrataDNX™ switch portfolio. The new devices range from 30Gbps to 300Gbps and offer capabilities and programming models similar to those of Broadcom's flagship devices. “The newly announced StrataDNX switch family includes the following device options: BCM88476 - A 300 Gbps packet processor and traffic manager including StrataDNX fabric interfaces compatible with BCM88750 and BCM88770 Fabric Elements, plus external packet buffer and table expansion. (“Kalia”) BCM88470 - A single-chip packet processor and traffic manager delivering 160 to 300 Gbps of Ethernet capacity with external packet buffer and table expansion. (“Qumran-AX”) BCM88270 - A small footprint single-chip packet processor and traffic manager delivering 30 to 120 Gbps of Ethernet capacity with external packet buffer expansion. (“Qumran-uX”) From Charlie Demerjian at SemiAccurate: “The idea for the entire StrataDNX line is simple enough: start with the big telco/provider level switches and push the same architecture out to the remote nodes and even edge devices. By edge devices we mean RANs and large cell towers, not home routers and picocells. The markets Broadcom is aiming this line at value consistency in coding, management, and performance, so a top-to-bottom device line is pretty much a must-have for salespeople to get returned calls. “Broadcom is keen to point out that the DNX line has deterministic performance at full line rates; you may add latency with more features but it shouldn’t affect throughput. Part of this is from the Hierarchical Traffic Manager, part from the architecture of the system; it was designed from the ground up to be used by telcos and service providers who value deterministic</description>
      </item>
      <item>
         <title>CEA's Shapiro: Consumer Electronics Industry Growing Strong, &quot;Disruptive Innovation Will Lead the Way&quot;</title>
         <link>https://www.broadcom.com/blog/ceas-shapiro-consumer-electronics-industry-growing-strong-disru</link>
         <guid>https://www.broadcom.com/blog/ceas-shapiro-consumer-electronics-industry-growing-strong-disru</guid>
         <pubDate>January 7, 2014</pubDate>
         <description>LAS VEGAS: It wouldn't be the Consumer Electronics Show without soothsayer Gary Shapiro kicking off the annual trade show, the largest gathering of its kind, in Las Vegas today. &quot;I love CES,&quot; Shapiro told the crowd of attendees during his opening day keynote session at the Venetian Hotel. Shapiro, CEO of the Consumer Electronics Association (CEA), was predictably upbeat. He reported a forecast of 2.4 percent growth for the U.S. consumer electronics industry, citing the latest semiannual Consumer Electronics Sales and Forecasts Report. The modest uptick means the industry is expected to see more than $200 billion in revenue this year. He drummed up excitement for the thousands of new CE products on display at this year's show, confident that industry growth would be led by &quot;disruptive technology innovations&quot; that would create new product categories. Among those game-changing innovations: Ultra HD television, 3-D printing and microelectromechanical systems (MEMS), which are essentially tiny sensors that will help transmit biometric data to all sorts of wearable devices and help proliferate the Internet of Things. Among other big trends at the show, automotive looms large, with more Las Vegas Convention Center real estate dedicated to in-car tech than ever before, including self-driving cars, infotainment and more. He was quick to note one other bit of good news: The Consumer Electronics Association lobbied the Federal Aviation Administration to allow airplane passengers to use their gadgets while flying -- a declaration that earned applause from the many attendees who flew to Las Vegas from around the world. The International Consumer Electronics Show continues through Friday in Las Vegas. 
See more of our photos from the CES floor. Get the latest CES news from Broadcom on our dedicated website. Follow the Blog Squad and join the conversation on Twitter at #connectingeverything, liking us on Facebook and</description>
      </item>
      <item>
         <title>With New Broadcom Tech Along for the Ride, Connectivity in Cars Gets Speedier, More Robust</title>
         <link>https://www.broadcom.com/blog/with-new-broadcom-tech-along-for-the-ride-connectivity-in-cars</link>
         <guid>https://www.broadcom.com/blog/with-new-broadcom-tech-along-for-the-ride-connectivity-in-cars</guid>
         <pubDate>December 20, 2013</pubDate>
         <description>Bluetooth technology in today's new cars is almost a given. Car buyers have an expectation that their smartphones will connect to their in-car audio systems for hands-free chatting and music playback. All 12 of the world's major car manufacturers offer Bluetooth hands-free calling systems in their vehicles. But far fewer of the autos rolling off assembly lines today, from 10 to 20 percent, come equipped with Wi-Fi technology, in part because consumers have yet to discover the features that a Wi-Fi connection can unleash. Fewer still employ Bluetooth Smart, a low-power flavor of the ubiquitous, short-range connection that enables drivers to do more with their devices while preserving precious battery power. In five years, some 60 percent of the new cars in the U.S. will have both technologies, according to a recent study by the IEEE. By 2025, the two technologies are expected to be in just about every new car to roll off the assembly line. Awareness of in-car connectivity has seen a slow build over the past few years, but automotive technologies are expected to generate headlines at this year's Consumer Electronics Show, where about 125 auto tech companies are expected to cover more than 140,000 square feet of exhibit space at the Las Vegas Convention Center and surrounding hotels. Carmakers are already looking ahead, designing in-car communications hubs that will transform the full automotive (as opposed to just driving) experience. From BMW to Hyundai to Mercedes, automakers see the future: a consumer market that will demand ubiquitous, reliable connectivity for on-the-go information and entertainment. 5G WiFi is one of the keys that will unlock these new experiences. Thilo Koslowski, vice president and analyst at market research firm Gartner, predicted in a recent blog post that the automobile will eventually become more innovative and cooler than smartphones and excite drivers 
and passengers in immersive experiences. Expect</description>
      </item>
      <item>
         <title>Powerline Networks Get Boost with New Devices from Devolo</title>
         <link>https://www.broadcom.com/blog/home-networking/powerline-networks-get-boost-with-new-devices-from-devolo/</link>
         <guid>https://www.broadcom.com/blog/home-networking/powerline-networks-get-boost-with-new-devices-from-devolo/</guid>
         <pubDate>April 25, 2012</pubDate>
         <description>Powerline networking is back, and Broadcom's innovative technology is making it better than ever. It's simple to install and just works: simply plug an adapter into a standard power outlet to boost the home network's performance. Devolo, the European leader in Powerline Communications (PLC), has picked Broadcom's HomePlug AV chip to power its latest line of PLC adapters, giving users an easy way to experience Internet everywhere and connect devices in their homes. Devolo's PLC adapters support the growing need for simultaneously connecting multiple devices to a home network. That allows users to connect devices such as game consoles and Blu-ray players without compromising broadcast and IPTV transmissions. But it does more than just that. Broadcom's technology dramatically lowers power consumption both while on and in sleep mode, drawing less than 2.5 watts while on and less than 0.5 watts in standby. How Does Powerline Work? Powerline adapters use the existing electrical wiring in a home to create an instant network. Users simply plug a tiny device, half the size and 30% cheaper than previous versions, directly into a power outlet. No new wires needed! And because almost every room has a power outlet, users can bring networking technology into areas of the house that have previously been dead zones. Powerline Connects the Home Powerline also opens the door to new IP-based services that service providers can offer their customers, cool apps such as lighting control, home automation and energy management. It also supports the delivery of multiple HD broadband streams and IPTV services throughout the home. Broadcom is the only company with a complete portfolio of home networking standards to create a high performance and reliable plug-n-play connected network. Adding HomePlug to the existing MoCA, Wi-Fi and DLNA technologies, Broadcom is enhancing the TV and broadband experiences by delivering video, Internet and TV to any screen. 
Related Reading: Review:</description>
      </item>
      <item>
         <title>Broadcom's Community Wi-Fi Named Finalist for Best New Cable Service</title>
         <link>https://www.broadcom.com/blog/broadcoms-community-wi-fi-named-finalist-for-best-new-cable-ser</link>
         <guid>https://www.broadcom.com/blog/broadcoms-community-wi-fi-named-finalist-for-best-new-cable-ser</guid>
         <pubDate>October 8, 2012</pubDate>
         <description>The spotlight is on Broadcom's Community Wi-Fi software service as the newest go-to technology for cable subscribers, according to Light Reading, a top source of news for the communications industry.

Community Wi-Fi was recently named a finalist in the Best New Cable Service or Application category for Light Reading's Leading Lights awards. A rundown of the technology: Broadcom's software, coupled with a DOCSIS 3.0-enabled cable set-top box or media gateway, allows cable operators to offer on-the-go Wi-Fi hot spots, dubbed Community Wi-Fi, as a new service to their subscription customers.

Such Community Wi-Fi hot spots are already being put into action in the U.S. and Europe. The Netherlands' biggest cable operator, Ziggo, launched a trial as a test for a wider implementation.

Light Reading's &quot;Best New Cable Service or Application&quot; will be awarded to &quot;the company that has developed a potentially market-leading product,&quot; according to their website.

The software technology was chosen from a competitive group that included Comcast's X1 platform and Liberty Global's Horizon TV. Winners are set to be announced at an awards dinner at New York's Manhattan Penthouse on Nov. 7 at the Ethernet Expo Americas trade show.

See the full list of all categories and nominees.

Related:

	Cable Connections Power Community Wi-Fi at IBC
	Light Reading: Broadcom's Next D3 Chip Will Leapfrog Intel
	Broadcom, Rovi Open Doors for Enhanced Entertainment at IBC [Video]
	IPTV Revolution in Your Living Room: Broadcom at IBC Amsterdam
</description>
      </item>
      <item>
         <title>StrataXGS Trident II Design Team Awarded by Electronic Products</title>
         <link>https://www.broadcom.com/blog/strataxgs-trident-ii-design-team-awarded-by-electronics-products</link>
         <guid>https://www.broadcom.com/blog/strataxgs-trident-ii-design-team-awarded-by-electronics-products</guid>
         <pubDate>February 7, 2013</pubDate>
         <description>[caption id=&quot;attachment_7178&quot; align=&quot;alignleft&quot; width=&quot;99&quot;] The StrataXGS Trident II series is one of Electronic Products magazine's 2012 Products of the Year.[/caption]

It's not every day that Broadcom's hardworking engineers get recognized for their innovations, especially when their work deals in the particulars of managing the complex back-end of cloud networks. This week, the team that designed the StrataXGS Trident II was lauded after the switch series was named one of the 2012 Products of the Year by Electronic Products magazine.

The award is an important triumph in the team's collective careers, said Jim Harrison, the magazine's West Coast editor, as well as a milestone for Broadcom, during a presentation at Broadcom's San Jose campus this week. The StrataXGS Trident II series was released last summer and addresses critical issues of cloud-scale networking.

The magazine chooses 17 winning products for the annual awards from among the thousands that hit the market each year. The winners represent significant advancement and innovative design in their respective categories, according to the magazine.

The members of the StrataXGS Trident II design team on hand to receive the award were Avinash Mani, Mohammad Issa, Nick Kucharewski, Hsin-Yuan Ho, Venkatesh Buduma and Mike Jorda. The team has worked together for about 12 years, said Kucharewski, a Senior Director of Product Marketing in the Infrastructure and Networking Group.

[caption id=&quot;attachment_7197&quot; align=&quot;aligncenter&quot; width=&quot;500&quot;] Members of Broadcom's StrataXGS Trident II design team were on hand to receive the award, from left: Avinash Mani, Mohammad Issa, Nick Kucharewski, Hsin-Yuan Ho, Venkatesh Buduma and Mike Jorda.[/caption]

Related: 

	[Press Release]: Broadcom StrataXGS Trident II Awarded Product of the Year
	The Flexible Cloud: Smart-Table Technology Enables Network Scalability
	Broadcom's 28nm Technology: Greater performance and less power consumption is only the beginning
	Virtualization Meets the Cloud at VMworld
	Broadcom Tackles Cloud Control at VMworld
</description>
      </item>
      <item>
         <title>Inside OpenNSL: Open Source Innovation in the Network Has Arrived</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/inside-opennsl-open-source-innovation-in-the-network-has-arrived/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/inside-opennsl-open-source-innovation-in-the-network-has-arrived/</guid>
         <pubDate>March 10, 2015</pubDate>
         <description>As the Internet has evolved, with more users performing more tasks on more devices, data centers are pushed to their limits to keep up with the demands. At the same time, progress in open-source applications and software can be leveraged in data center networking to create a new generation of open, flexible and customizable solutions that promise to relieve some of the bottlenecks. Broadcom is bringing those two worlds together with its recent release of an open-source software system for its industry-leading switching silicon that will allow customization of data center operations. The open-source solution, called OpenNSL, or Open Network Switch Library, isn't the same sort of open-source environment that allows, for example, a home hobbyist to unlock a smartphone. Instead, OpenNSL provides data center operators with the freedom to control their equipment so that it operates in a more efficient, cost-effective way that meets their specific needs. Through OpenNSL, network administrators can better manage workloads and network traffic, as well as share designs that could boost hardware innovation in the future. OpenNSL is being introduced in conjunction with the Open Compute Project Summit, a two-day conference this week in San Jose, Calif., that gathers companies in the open-source hardware and software communities. The conference host is the Open Compute Project, a coalition of data center architects such as Broadcom, Facebook, Microsoft, Dell and other companies that share a common goal: creating tools and publishing standards to enable collaborative development around network architectures so that they're more flexible and easier to scale. Download the APIs and documentation on Broadcom's GitHub site. 
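OpenNSL itself is a C library, and its real APIs are documented on the GitHub site linked above. As a purely illustrative sketch of the kind of programmability the post describes (software creating VLANs and assigning switch ports), the class and method names below are invented for this example and are not the OpenNSL API:

```python
# Toy in-memory stand-in for a programmable switch's VLAN table.
# All names here are hypothetical; consult the real OpenNSL docs for actual calls.

class SwitchModel:
    """Models a switch with a fixed port count and a VLAN membership table."""

    def __init__(self, num_ports):
        self.ports = set(range(num_ports))
        self.vlans = {}  # VLAN id -> set of member ports

    def vlan_create(self, vid):
        if vid in self.vlans:
            raise ValueError(f"VLAN {vid} already exists")
        self.vlans[vid] = set()

    def vlan_port_add(self, vid, port):
        if port not in self.ports:
            raise ValueError(f"no such port {port}")
        self.vlans[vid].add(port)

# A controller script programs the "silicon" instead of typing CLI commands:
sw = SwitchModel(num_ports=32)
sw.vlan_create(100)
for p in (0, 1, 2, 3):
    sw.vlan_port_add(100, p)
print(sorted(sw.vlans[100]))  # prints [0, 1, 2, 3]
```

The value proposition in the post is exactly this shape: network state becomes something application code can create, query and version, rather than a box-by-box manual configuration.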
Facebook, which launched the Open Compute Project with Broadcom and other technology companies in 2011, is an example of the type of company that could benefit from a data center that has more flexibility.The social networking giants</description>
      </item>
      <item>
         <title>NetXtreme C-Series Eases Transition to 25/50G Ethernet in Cloud Scale Networks</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/netxtreme-c-series-eases-transition-to-2550g-ethernet-in-cloud-scale-networks/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/netxtreme-c-series-eases-transition-to-2550g-ethernet-in-cloud-scale-networks/</guid>
         <pubDate>July 27, 2015</pubDate>
         <description>Anyone who's ever wished they could clone themselves to become more productive can understand the appeal of network and server virtualization. For the past decade, data center architects have leaned on virtualization, which enables a computer server to behave as if it had dozens of clones of itself, to increase efficiency and save costs. But this efficiency and cost savings has also increased demands on networks, which have grown rapidly in recent years. With networks creaking under the weight of ever-increasing data traffic, the problem will only become worse. While virtual machines (VMs) can take some of the otherwise idle computing power of a server and divide up the workload more efficiently, there are diminishing returns on adding more of them. Market research firm Infonetics predicts that the number of VMs that cloud-scale network operators will stuff into the average server will continue to grow over the next two years, to 98 VMs per server. This increases the demands on networks as each VM clamors for network services. The mega-scale data center operators are already grappling with this problem. For many of them, virtualization can't happen fast enough. With increasing numbers of both physical and virtual servers to manage, IT departments have the potential to maximize their equipment while reducing capital and operational expenses. But what about scaling the network? Ethernet Controller Architecture Advances This week, Broadcom unveiled the NetXtreme C-Series, a new Ethernet controller product family that will give data center architects the tools they need to support higher VM density in their servers. It's a first-in-the-industry offering designed specifically for the up-and-coming Ethernet link speeds of 25 or 50 Gigabits per second (Gbps) in the smallest package, while also minimizing power. The NetXtreme C-Series also supports older link speeds of 10 and 40 Gbps. 
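To see why the jump in link speed matters as VM density climbs, here is a back-of-the-envelope calculation (ours, not from the post) using the article's figure of 98 VMs per server and the four link speeds it mentions:

```python
# Average share of one Ethernet link per VM, assuming the simplistic case
# where all VMs on a server split the link bandwidth evenly.

def per_vm_gbps(link_gbps, vms):
    """Average bandwidth per VM in Gbps for `vms` VMs sharing one link."""
    return link_gbps / vms

# Infonetics projects ~98 VMs per server; compare the link speeds in the post.
for speed in (10, 25, 40, 50):
    mbps = per_vm_gbps(speed, 98) * 1000
    print(f"{speed} Gbps link: ~{mbps:.0f} Mbps per VM")
# e.g. a 50 Gbps link leaves roughly five times the per-VM headroom of 10 Gbps
```

Real traffic is bursty rather than evenly split, so this understates peak needs, but it illustrates why 10 Gbps links grow cramped at cloud-scale VM densities.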
New products in the series include BCM57301, BCM57302, BCM57304, P150c, P225c, and</description>
      </item>
      <item>
         <title>Network Know-How: Broadcom Expands Switch API Library</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/network-know-how-broadcom-expands-switch-api-library/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/network-know-how-broadcom-expands-switch-api-library/</guid>
         <pubDate>October 5, 2015</pubDate>
         <description>Network administrators and IT experts are in the midst of a sea change in their industry. The disaggregation of the network has kicked up a lot of interest around the software-defined networking (SDN) trend, which enables the companies that store and serve up massive amounts of data to use software and analytics to better manage their networks. SDN is still taking shape in the market, but in the meantime, it's enabling the professionals who are charged with managing and monitoring mega-scale data centers to find more technology to improve operational transparency. Ultimately, the goal is to lighten their loads by offering up tools that enable a high-level, mission-control experience for managing and monitoring network traffic. Broadcom, as a leading player in data center switches, is intimately familiar with the questions that network administrators face when wading into SDN technologies. Not only do they need to justify their hardware investments and ensure that they can scale, they must also figure out how to wrangle increasingly complex networks that can't see any downtime. Broadcom earlier this year announced OpenNSL, a software platform that enables the programming of Broadcom network switch silicon-based platforms. Today, Broadcom doubled the number of APIs openly available for download in its Open Network Switch Library (OpenNSL) and expanded the potential use cases for network administrators to test. OpenNSL also enables the development of new applications on top of Broadcom's market-leading StrataXGS switches, giving customers the flexibility to tailor their network configuration and meet their unique infrastructure requirements. On its GitHub site, Broadcom includes downloadable API documentation mapping out its OpenNSL software, a developer's guide and sample code for building software and hardware integrations. 
&quot;Because OpenNSL is open source, customers can develop an innovative, custom product,&quot; said Ashok Raman, senior product line manager, OpenNSL, at</description>
      </item>
      <item>
         <title>Meet the first router using MaxWiFi</title>
         <link>https://www.broadcom.com/blog/meet-the-first-router-using-maxwifi</link>
         <guid>https://www.broadcom.com/blog/meet-the-first-router-using-maxwifi</guid>
         <pubDate>September 27, 2017</pubDate>
         <description>This article first appeared at MaxWiFi.org

 

ASUS Vice President Eric Chen recently took the stage at The Edge of Beyond press event at IFA 2017 in Berlin to unveil several exciting new products, including one very special router, ASUS RT-AX88U. The ASUS RT-AX88U router isn’t just the first-ever ASUS router designed to use 802.11ax Max WiFi technology—it’s the world’s first Max WiFi router product to be announced.

Max WiFi helps the ASUS RT-AX88U router achieve “unprecedented combined speeds of up to 5952Mbps—1148Mbps on the 2.4GHz band and 4804Mbps on the 5GHz band,” according to an ASUS press release. Not only does Max WiFi make the ASUS RT-AX88U router impressively fast, it also makes it significantly more efficient. Thanks to Max WiFi, the press release goes on to explain, “ASUS RT-AX88U supports more simultaneous data transfers than 802.11ac MU-MIMO routers, allowing more devices to have faster network access at the same time without waiting for each other—perfect for homes with a growing number of smart, connected devices.”

Claus Hetting from Wi-Fi Now notes that while “ASUS—and soon many others—will market 802.11ax [Max WiFi technology] based on speed, this new Wi-Fi technology is really more about quality: doing away with contention and introducing OFDMA will serve up a vastly improved Wi-Fi user experience in the home and everywhere else.”

Indeed, the ASUS RT-AX88U router with Max WiFi is surely the first of many products to take advantage of this new best-in-class Wi-Fi standard. To learn more about Max WiFi and about Broadcom’s new ecosystem of Max WiFi chips, visit maxwifi.org.
</description>
      </item>
      <item>
         <title>CCBN Video: Broadcom Technology Powers the Connected Home</title>
         <link>https://www.broadcom.com/blog/home-networking/ccbn-video-broadcom-technology-powers-the-connected-home/</link>
         <guid>https://www.broadcom.com/blog/home-networking/ccbn-video-broadcom-technology-powers-the-connected-home/</guid>
         <pubDate>March 21, 2012</pubDate>
         <description>From the Broadcom booth at the China Cable Broadcasting Network show in Beijing, Executive VP Dan Marotta offers a two-minute demo of the Connected Home technology on display at the show.



The video is also available in Mandarin on our CCBN page or our YouTube Channel.

Related Posts:

	Pay-TV in China Reaches New Heights with Broadcom Technology
	Broadcom at CCBN: The China TV Blitz Begins
	Digital TV Goes Global

 

 </description>
      </item>
      <item>
         <title>How Cars are Connecting to the App Economy</title>
         <link>https://www.broadcom.com/blog/ces/how-cars-are-connecting-to-the-app-economy/</link>
         <guid>https://www.broadcom.com/blog/ces/how-cars-are-connecting-to-the-app-economy/</guid>
         <pubDate>December 20, 2013</pubDate>
         <description>One of the biggest challenges around the ramp-up of new in-car technologies is that early adopters tend to upgrade their devices more often than they buy new cars. But what if in-car tech was flexible enough to integrate the latest in connectivity into older models? Just as smartphones today can be updated with the latest software, so too could a car's built-in infotainment system. By 2018, automakers will ship more than 35 million vehicles that contain infotainment systems that link directly to a smartphone, according to ABI Research. A platform called MirrorLink, backed by the Connected Car Consortium (CCC), which comprises more than 80 percent of the world's automakers, is looking to mirror the smartphone experience to an in-dash display. MirrorLink can create great experiences for drivers and passengers, who bring along a smartphone or tablet equipped with the latest apps for real-time traffic updates, streaming music channels and even biometric trackers that read a driver's fatigue or stress levels. &quot;Carmakers face the difficult challenges of not only how best to integrate smartphones into their vehicles, but also how to ensure that the integration strategy remains viable throughout the life of the vehicle and multiple generations of smartphones,&quot; Gareth Owen, principal analyst at ABI Research, said in a statement. Unlike other in-home screen-sharing technologies such as Miracast, the MirrorLink standard also defines the back-channel controls and runs only CCC-approved apps. That makes it easy to view and control phone capabilities like GPS, cellular modems, music and video players, as well as sensors, from the car's own interfaces, and it adds the smart protocols that limit which apps can work when the car is in motion. Imagine, for example, a smart parking app that not only helps drivers find the closest, cheapest open spot but also records the location and provides walking directions between</description>
      </item>
      <item>
         <title>Up Close With Wireless Charging at CES</title>
         <link>https://www.broadcom.com/blog/ces/up-close-with-wireless-charging-at-ces/</link>
         <guid>https://www.broadcom.com/blog/ces/up-close-with-wireless-charging-at-ces/</guid>
         <pubDate>January 8, 2014</pubDate>
         <description>LAS VEGAS -- The thousands of shiny new devices on display at this week's Consumer Electronics Show come with a tremendous promise -- to enhance consumers' lives. Some of these devices can create new ways for consumers to interact with other people, while others aim to provide elegant solutions to everyday problems. Wireless charging falls into that second camp. [caption id=&quot;attachment_10879&quot; align=&quot;alignleft&quot; width=&quot;270&quot;] Rezence wireless charging pad is concealed within a high-tech sofa.[/caption] The Alliance for Wireless Power (A4WP), an industry trade group, was on the show floor at the Las Vegas Convention Center demonstrating real-world applications for wireless charging. Broadcom's Reinier van der Lee, director of product marketing, platforms in the Mobile &amp; Wireless Group, talked up the many benefits of Rezence, the A4WP's brand name for a wireless charging standard that's based on magnetic resonance technology. The biggest win for Rezence, according to Van der Lee, is that it allows consumers to simultaneously charge multiple devices on a single pad, eliminating the need for a slew of different charging cables. Rezence has drawn interest from industry watchers and consumers alike as the A4WP promotes its vision for mainstream adoption of wireless charging for smartphones, laptops, wearable devices and gadgets of all stripes. The key strength of Rezence is its flexibility, enabling charging even if there's interference from other metal objects or if the device being charged isn't perfectly aligned. These features allow for a &quot;drop and go&quot; charging experience for consumers, potentially doing away with all those power cords. 
[caption id=&quot;attachment_10893&quot; align=&quot;alignleft&quot; width=&quot;270&quot;] Brad Miller, a member of the board for the Alliance for Wireless Power, shows a demo of wireless charging in a car console.[/caption] This flexibility allows the charging pads to be embedded into almost any location, such as a piece of furniture or a</description>
      </item>
      <item>
         <title>IEEE Consumer Electronics Magazine: Broadcom's Stephen Palm on the future of the connected home</title>
         <link>https://www.broadcom.com/blog/ieee-consumer-electronics-magazine-broadcoms-stephen-palm-on-th</link>
         <guid>https://www.broadcom.com/blog/ieee-consumer-electronics-magazine-broadcoms-stephen-palm-on-th</guid>
         <pubDate>July 5, 2012</pubDate>
         <description>Connectivity starts at home. From the first TV set-top box chips to the creation of the DOCSIS cable architecture standard, Broadcom has shaped the connected home over its 21-year history. But what will the future of home connectivity look like? Stephen Palm, Broadcom Senior Technical Director, Broadband Communications Group, looks back at more than two decades of home networking and considers how the next generation of technologies needs to evolve in this article featured in the June issue of IEEE Consumer Electronics Magazine, a semiannual journal published by the Institute of Electrical and Electronics Engineers' Consumer Electronics Society. Read Stephen Palm's &quot;Home Networks: From Bits to Gigabits&quot; (Paid Registration Required) A few key takeaways from the article: No single home networking technology has emerged as the clear winner: Homes today use a combination of technologies including MoCA, Wi-Fi, Ethernet, and HomePlug. With hundreds of millions of devices already deployed in today's networks, tomorrow's successful networks will build upon these technologies to meet future demands with MoCA 2.0, 5G WiFi (802.11ac) and HomePlug AV 2.0. Layering IEEE 1905 on those multiple technologies will allow easier installation and maintenance through topology discovery, metrics, and unified security, and allow devices to select the most appropriate path. Home networking technologies will become an increasingly important component of delivering satellite, cable, telecom and over-the-top content, including premium movies, sports and Internet-based services. Gateways will emerge as a media hub in the home that streams content over the home network directly to devices such as TVs and game consoles and will support the proliferation of mobile devices such as smartphones and tablets. Broadcom is the only company to offer a complete portfolio of home networking standards to create a high-performance and reliable plug-n-play connected home network. 
About Stephen Palm: Dr. Stephen Palm received his bachelor's degree in electrical</description>
      </item>
      <item>
         <title>Broadcom's Dr. Nambirajan Seshadri Named Member of the National Academy of Engineering</title>
         <link>https://www.broadcom.com/blog/broadcoms-dr-nambirajan-seshadri-named-member-of-the-national-a</link>
         <guid>https://www.broadcom.com/blog/broadcoms-dr-nambirajan-seshadri-named-member-of-the-national-a</guid>
         <pubDate>February 10, 2012</pubDate>
         <description>Nambirajan Seshadri, senior vice president and general manager, Mobile Wireless Group, and chief technology officer, mobile platforms and wireless connectivity, was recently elected as a new member of the National Academy of Engineering (NAE), an honor considered among the highest of professional engineering distinctions. NAE members are nominated and elected by their peers. Membership is awarded based on important and significant contributions to engineering theory and practice, as well as unusual accomplishments in the pioneering of new fields of technology. Seshadri was picked for his contributions to wireless communications theory and the development of mass-market wireless technology. &quot;Election to the NAE is among the highest recognitions that an engineer can achieve,&quot; Broadcom Co-founder and Chief Technical Officer Henry Samueli said. &quot;The process is extraordinarily competitive, so this is quite an achievement.&quot; Broadcom now counts four NAE members among its engineering ranks, including Samueli, Nick Alexopoulos, vice president, antenna and RF research and university relations in the office of the CTO, and Arogyaswami Paulraj, senior technical advisor in the office of the CTO. Seshadri is among the 66 newly elected members and 10 foreign associates named this year, bringing the total U.S. membership to 2,254 and the number of foreign associates to 206, according to the NAE's website. Dr. Nambi Seshadri joined Broadcom in 1999. He was the first employee dedicated to developing the company's wireless strategy, which initially began with wireless connectivity products and subsequently entered the cellular baseband market. Both segments have evolved into stand-alone business groups. 
As CTO of the Mobile Platforms and Wireless Connectivity business groups, he helped drive Broadcom's entry into 2G and 3G cellular, mobile multimedia, low power Wi-Fi for handsets, combo chips that integrate multiple wireless connectivity technologies, GPS, 4G technologies, as well as development of a strong IPR portfolio. Since 2011, he</description>
      </item>
      <item>
         <title>Kudos for 4K Joey: Broadcom Processors Inside DISH's Award-Winning Ultra HD Set-Top Box</title>
         <link>https://www.broadcom.com/blog/ces/kudos-for-4k-joey-broadcom-processors-inside-dishs-award-winning-ultra-hd-set-top-box/</link>
         <guid>https://www.broadcom.com/blog/ces/kudos-for-4k-joey-broadcom-processors-inside-dishs-award-winning-ultra-hd-set-top-box/</guid>
         <pubDate>January 14, 2015</pubDate>
         <description>If there's one clear trend to emerge from the International Consumer Electronics Show last week, it's that 4K television is well on its way to the mainstream. Ultra HD television sets are coming down in price and the industry is seeing an uptick in 4K content production, including efforts by Sony (in partnership with Netflix), Amazon, YouTube and satellite operator DirecTV. [caption id=&quot;attachment_14297&quot; align=&quot;alignright&quot; width=&quot;300&quot;] DISH Joey 4K[/caption] Broadcom is collaborating behind the scenes with industry leaders, including DISH, on the missing piece: how to get all of that pixel-packed video into consumers' living rooms via set-top boxes. One of the ways leading pay-TV providers are doing that is with a standard called High Efficiency Video Coding (HEVC), a video compression technology that cuts the required bandwidth of 4K streams in half. At CES, DISH unveiled the 4K Joey, powered by the Broadcom BCM7448 SoC processor. It plays back 4K video at 60 frames per second with 10-bit color and works with any 4K television that supports HDMI 2.0 and HDCP 2.2. The 4K Joey is a companion set-top box that works with DISH's Hopper. The Hopper acts as a media gateway, and the Joey is a streaming box for secondary TVs in other parts of the house. DISH took home a CES Editors' Choice Award for the 4K Joey, which is billed as the industry's first pay-TV provider Ultra HD set-top box. The competition paired up editors from Reviewed.com with the Consumer Electronics Association, the industry trade group that runs the yearly CES event in Las Vegas. The Reviewed.com Editors' Choice Awards seek to recognize debut products that were &quot;particularly innovative, or striking in their technology, design, or value&quot; across a dozen different categories, including automotive, cameras, gaming, health, home theater and televisions, among others. Reviewed.com editors had high praise: The Dish</description>
      </item>
      <item>
         <title>Broadcom to Acquire NetLogic Microsystems, Inc., a Leader in Network Communications Processors</title>
         <link>https://www.broadcom.com/blog/broadcom-to-acquire-netlogic-microsystems-inc-a-leader-in-netwo</link>
         <guid>https://www.broadcom.com/blog/broadcom-to-acquire-netlogic-microsystems-inc-a-leader-in-netwo</guid>
         <pubDate>September 13, 2011</pubDate>
         <description>Combination to Deliver Seamless End-to-End Network Infrastructure Platforms

Broadcom Corporation, a global innovation leader in semiconductor solutions for wired and wireless communications, and NetLogic Microsystems, Inc., a leader in high-performance intelligent semiconductor solutions for next-generation networks, have entered into a definitive merger agreement. Under the agreement, NetLogic Microsystems shareholders will receive $50 per share in a transaction valued at approximately $3.7 billion, net of cash assumed. The acquisition meaningfully extends Broadcom's infrastructure portfolio with a number of critical new product lines and technologies, including knowledge-based processors, multi-core embedded processors, and digital front-end processors, each of which offers industry-leading performance and capabilities. The combination enables Broadcom to deliver best-in-class, seamlessly integrated network infrastructure platforms to its customers, reducing both their time-to-market and their development costs. The transaction has been approved by the Broadcom and NetLogic Microsystems boards of directors and is subject to customary closing conditions, including the receipt of domestic and foreign regulatory clearances and the approval of NetLogic Microsystems' stockholders. The transaction is expected to close in the first half of 2012. Broadcom currently expects the acquisition to be accretive to earnings per share by approximately $0.10 on a non-GAAP basis in 2012. &quot;This transaction delivers on all fronts for Broadcom's shareholders -- strategic fit, leading-edge technology and significant financial upside,&quot; said Scott McGregor, Broadcom's President and CEO. 
&quot;With NetLogic Microsystems, Broadcom is acquiring a leading multi-core embedded processor solution, market-leading knowledge-based processors, and unique digital front-end technology for wireless base stations that are key enablers for the next-generation infrastructure build-out. Broadcom is now better positioned to meet growing customer demand for integrated, end-to-end communications and processing platforms for network infrastructure.&quot; Mr. McGregor added, &quot;Today's transaction is consistent with Broadcom's strategic portfolio review process and with our focus on value creation through disciplined capital allocation while delivering best-in-class</description>
      </item>
      <item>
         <title>Today at VMworld: Broadcom Unveils New Innovation for Cloud</title>
         <link>https://www.broadcom.com/blog/today-at-vmworld-broadcom-unveils-new-innovation-for-cloud</link>
         <guid>https://www.broadcom.com/blog/today-at-vmworld-broadcom-unveils-new-innovation-for-cloud</guid>
         <pubDate>August 27, 2012</pubDate>
         <description>Broadcom opened VMworld today with a bang, revealing our latest innovation for cloud-scale networking: the StrataXGS Trident II Switch Series. Whether you're a technophile, a mild geek enthusiast or simply a smartphone or tablet user, this product will have an impact on how quickly and efficiently you can download data. The number of network connections is growing at an astounding pace. Cisco's latest VNI forecast predicts the number of connected devices to reach 50 billion by 2020 -- that's six devices for every person on Earth. What many of us outside the cloistered world of IT and network managers don't think about is the resulting stress that our intense and frequent access to high-bandwidth content and apps places on the data center network infrastructure. Thankfully, the innovative masterminds working behind the scenes at Broadcom have been developing a higher-scale and more efficient way to prepare the network infrastructure so that cloud-based services can seamlessly reach the content-hungry masses. Based on Broadcom's award-winning StrataXGS architecture, the new Trident II series is the first to deliver more than 100 10GbE ports, a 4X increase in network virtualization scale and a 2X increase in forwarding and classification tables. Touting the world's highest bandwidth and port density, the new series enables cost-effective and high-performance data center build-out to an unprecedented number of server and storage endpoints, applications and users, enabling a new era in cloud-scale and software-defined networking. We'll continue the conversation about network virtualization at VMworld in San Francisco this week, where I will present together with other industry experts including VMware's Technical Director T. Sridhar and Solutions Architect Ravindra Neelakant. 
If you're attending the show, come by our session on Wednesday, August 29 and join our discussion. And, of course, you're welcome to swing by our booth - #2107 - to meet our team and</description>
      </item>
      <item>
         <title>Broadcom's 28nm Technology: Greater performance and less power consumption is only the beginning</title>
         <link>https://www.broadcom.com/blog/broadcom-s-28nm-technology-greater-performance-and-less-power-consumption-is-only-the-beginning</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-28nm-technology-greater-performance-and-less-power-consumption-is-only-the-beginning</guid>
         <pubDate>October 1, 2012</pubDate>
         <description>When Broadcom unveils innovative new technologies, such as today's introduction of the world's first 28-nanometer multicore communications processor, it's easy to focus on the major benefits. Consider that the technology performs up to 400 percent faster but consumes up to 60 percent less power, and is optimized for service providers, enterprise data centers and cloud computing, as well as software-defined networking environments. Those are all great talking points, but Broadcom's new XLP 200-Series is about so much more. For the company itself, the announcement marks Broadcom's successful integration of NetLogic Microsystems technologies while expanding its addressable market within the $3 billion communications processor market. More importantly, for end users -- the network administrators and IT experts -- the technology that Broadcom now offers zeroes in on a subject that's been top of mind lately: security. Protecting the network is always mission-critical, but in recent days the subject has grabbed headlines as cloud providers, social networking sites and retail banks struggle to fend off malicious cyber-attacks on their websites. The XLP 200-Series is the first multicore communications processor to include on-chip security features that give network managers the power to thoroughly inspect, encrypt, authenticate and secure Internet traffic at wire speeds. This translates into the ability to better protect enterprise, data center and cloud networks from malware and intrusion threats at the packet level. Key integrated security features include:

	a grammar processing engine that parses data packets by fields, protocols or positions and assigns each parsed content to the appropriate database.
	a fourth-generation regular expression (RegEx) search engine, which searches packet content against a large database of security threats.
	a broad range of autonomous encryption and authentication processing engines to deliver comprehensive Layer 7 deep-packet inspection (DPI) capabilities.
	complete offload of the compute-intensive security functions from the CPU cores.

While the technology may</description>
      </item>
      <item>
         <title>Keynote Recap: Exploring the Power and Potential of Next-Gen Networking at Interop</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/keynote-recap-exploring-the-power-and-potential-of-next-gen-networking-at-interop/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/keynote-recap-exploring-the-power-and-potential-of-next-gen-networking-at-interop/</guid>
         <pubDate>May 8, 2013</pubDate>
         <description>To get a sense of the excitement, anticipation and even, dare we say it, anxiety around the fast-approaching Software Defined Networking (SDN) trend, look no further than this week's Interop conference in Las Vegas. The SDN discussion is everywhere: in the workshops, on the panels, at the show floor booths and, earlier today, on the main keynote stage. [caption id=&quot;attachment_8843&quot; align=&quot;alignright&quot; width=&quot;300&quot;] From left to right: Broadcom's Rajiv Ramaswami, Microsoft's Rajeev Nagar, VMware's Martin Casado and moderator Eric Hanselman.[/caption] Rajiv Ramaswami, executive vice president and general manager of the Infrastructure &amp; Networking Group at Broadcom, was joined on Wednesday morning by executives from industry heavyweights VMware and Microsoft to elevate the discussion and address some concerns about SDN. (See the keynote panelists' bios here.) Previously: On Deck at Interop 2013: Simplifying With SDN Right out of the gate, the panelists tried to quash concerns that SDN's arrival would be some sort of Armageddon that completely changes the role of the network. In fact, the panelists said, the coming of SDN is closely tied to a transition that's already happening at enterprises across the world, with trends like network virtualization, cloud computing and BYOD already ushering in the next generation of networking. As the technology and deployments evolve, companies are opening their doors (and their IT budgets) to unconventional network architectures that are increasingly automated and compatible with third-party applications. Overall, the network is still a small part of the larger data center budget. But if the network doesn't work right, then a lot of time and money are being wasted. &quot;You don't want the network to get in the way,&quot; Ramaswami told the audience of about 1,500 show attendees. Learn More: SDN: A Sea Change in the Data Center. 
The question of whether advancements in automation and virtualization eliminate the need for visibility into the physical network is</description>
      </item>
      <item>
         <title>Broadcom and Other Tech Leaders Form IoT Consortium to Accelerate Interoperability for Connected Devices</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-and-other-tech-leaders-form-iot-consortium-to-accelerate-interoperability-for-connected-devices/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-and-other-tech-leaders-form-iot-consortium-to-accelerate-interoperability-for-connected-devices/</guid>
         <pubDate>July 8, 2014</pubDate>
         <description>The tidal wave of Internet-connected gadgets, everything from fitness-monitoring wristbands to thermostats to door locks, makes clear the imperative of interoperability. The central questions among the embedded technology companies that make these devices tick: How do the expected 212 billion Internet of Things (IoT) devices get connected to the Internet? And how do they talk to the device (smartphone or tablet) that's managing them? &quot;Interoperability will be a critical enabler as the IoT ecosystem continues to evolve,&quot; said Rahul Patel, Broadcom Senior Vice President &amp; General Manager, Wireless Connectivity. Today, Broadcom is announcing that it has joined forces with other technology leaders to form a consortium that aims to deliver a standards-based specification, an open source implementation, and a certification program for wirelessly connecting devices. [caption id=&quot;attachment_12806&quot; align=&quot;aligncenter&quot; width=&quot;479&quot;] Photo courtesy of the Open Interconnect Consortium[/caption] The ultimate goal of the Open Interconnect Consortium, which also includes Atmel Corporation, Broadcom Corporation, Dell, Intel Corporation, Samsung Electronics Co., Ltd., and Wind River, is to accelerate the development of the Internet of Things by defining a common communications framework based on industry-standard technologies. To fully realize the vision of IoT, devices should be able to discover, connect and interoperate regardless of who makes them. The group plans to tap existing standards such as Wi-Fi and Bluetooth, while leaving the door open for emerging standards that could fit the massive scalability needs of the IoT ecosystem. &quot;Through our collaboration with other industry leaders in establishing an open IoT platform encompassing multiple connectivity technologies, we are removing the barriers to entry and opening up the opportunity for innovation to a broad range of inspired entrepreneurs,&quot; Patel said. 
In its press release announcing the consortium, the member companies said that the initial open source code will target the specific requirements for smart home and</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for the Trident 3 Ethernet switch</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-the-trident-3-ethernet-switch</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-the-trident-3-ethernet-switch</guid>
         <pubDate>July 25, 2017</pubDate>
         <description>From Rick Merritt at EE Times: &quot;When the Trident 3 ships next year it will power systems with as many as 32 100 Gbit/second Ethernet ports that cost as little as $3,000 and consume less than 400W.&quot; And: &quot;In the face of the rising competition, 'Broadcom is trying to get the message out it will be competitive on price,' said (Bob) Wheeler (The Linley Group). &quot;It also made a point about its backward compatibility with the Trident 2 and support for regression testing. 'That’s a powerful part of being an incumbent versus people launching a first-gen platform,' he added. &quot;Overall, Trident 3 aims to offer new programming capabilities with deterministic performance at a new low in cost and power. 'That’s big,' said Wheeler.&quot; From David Strom at Data Center Knowledge: &quot;The XGS line has been one of the leaders in network virtualization inside data centers, and the new line offers several benefits, including power and cost savings, better programmable support for new software-defined networking technologies, and higher switching throughput and densities. All of these will appeal to large-scale data center customers.&quot; From Bob Wheeler at The Linley Group: &quot;In addition to using 16nm technology, what sets Trident 3 apart from prior generations is its programmable ingress and egress pipelines. Most customers, however, will use Broadcom-supplied images that make Trident 3 backward compatible with Trident II/II+ features and APIs. Programmability enables new capabilities such as in-band network telemetry (INT), service-function chaining (SFC), new tunneling protocols, and sophisticated load-balancing algorithms. 
The company also increased Trident 3’s control-plane performance through faster interfaces and integrated CPUs for real-time processing.&quot; From Timothy Prickett Morgan at The Next Platform: &quot;The aim with Trident-3 is to provide networking for enterprises (with standalone apps as well as those running on private clouds) that need broader Ethernet protocol support</description>
      </item>
      <item>
         <title>DISH's Super Joey, with Broadcom Satellite Chip Inside, Gets CES Applause</title>
         <link>https://www.broadcom.com/blog/dishs-super-joey-with-broadcom-satellite-chip-inside-gets-ces-a</link>
         <guid>https://www.broadcom.com/blog/dishs-super-joey-with-broadcom-satellite-chip-inside-gets-ces-a</guid>
         <pubDate>January 16, 2014</pubDate>
         <description>Although the International Consumer Electronics Show wrapped up last week, the media's still buzzing about the thousands of new products that debuted at the yearly tech-stravaganza.

Among them was DISH Network's Super Joey, a companion set-top box based on Broadcom's BCM7346 processor that adds two additional satellite tuners to DISH's popular Hopper.

Last week, the Super Joey took home a CES Editors' Choice Award, an inaugural competition from Reviewed.com and the Consumer Electronics Association, the industry group that puts on the annual Las Vegas-based CES.

The awards seek to recognize outstanding innovation, design and value across a dozen different product categories, including automotive, cameras, gaming, health and televisions, among others. DISH's Super Joey was one of only two recipients in the home theater category.

The editors who judged the event had high praise:

Two new Joeys were unveiled at this year's show, and our favorite is the Super Joey. Last year, the Hopper let you record six channels at once; the Super Joey adds two more tuners, letting you record a whopping eight shows at the same time.

The Hopper acts as a media gateway, and the Super Joey is a streaming box for secondary TVs in other parts of the house, letting consumers record eight shows at the same time.

The tech media and other gadget reviewers had good things to say about the Super Joey, including the Los Angeles Times, Slashgear, CNet and Engadget.

Last year, DISH's Hopper Whole-Home HD DVR was awarded &quot;Best of Show&quot; distinction under the &quot;Best of CES&quot; awards program for the 2013 CES.
Related: From CES 2013: Dish Hopper DVR Powered by Broadcom [Video]</description>
      </item>
      <item>
         <title>Powerline Communications: Standard Outlets Boost Home Networks</title>
         <link>https://www.broadcom.com/blog/powerline-communications-standard-outlets-boost-home-networks</link>
         <guid>https://www.broadcom.com/blog/powerline-communications-standard-outlets-boost-home-networks</guid>
         <pubDate>December 3, 2012</pubDate>
         <description>The original in-home network, the series of wires and outlets that carry standard electricity from kitchen to bedroom to living room, will soon be taking connectivity to the next level. With a recent standards revamp for a technology known as Powerline Communications, or PLC, the everyday wall outlets that are already powering consumer electronics devices in the home are being tapped to provide access to the Internet, too. That's an important development for the consumer electronics industry as people rely on their existing home network, usually Wi-Fi, to do more than just surf Web pages from a laptop. Consumer demand is surging for the ability to do things like stream video to a tablet, upload photos from a mobile phone and engage in real-time, interactive game play. [caption id=&quot;attachment_5572&quot; align=&quot;alignright&quot; width=&quot;172&quot;] D-Link Systems Inc.'s Powerline adapter for the European market, with HomePlug-based Broadcom technology inside.[/caption] A growing number of home entertainment devices, from the set-top box and smart TV to the game console and family room tablet, are being outfitted with features and services that require a constant online connection. The deluge of content that comes along for the ride can put an unnecessary burden on Wi-Fi networks, which could eventually compromise the quality of the experience. Through Powerline networking, the standard outlet found on every wall becomes an instant hard-wired port to the Internet for every appliance that plugs in to the wall, almost like Ethernet. While Ethernet is faster and more robust than wireless networking, the obstacle to widespread adoption has been the need for special wiring and ports that aren't very common in older homes. Powerline's ports, on the other hand, are very common. 
Now, the technology standard that has been supporting earlier versions of Powerline for more than a decade, called HomePlug, is starting to see some upgrades that make Powerline</description>
      </item>
      <item>
         <title>Broadcom Co-Founder, CTO Henry Samueli to Give ICCE Keynote Jan. 13th</title>
         <link>https://www.broadcom.com/blog/broadcom-co-founder-cto-henry-samueli-to-give-icce-keynote-jan-</link>
         <guid>https://www.broadcom.com/blog/broadcom-co-founder-cto-henry-samueli-to-give-icce-keynote-jan-</guid>
         <pubDate>January 8, 2012</pubDate>
         <description>On the heels of accepting the Dr. Morris Chang Exemplary Leadership Award from The Global Semiconductor Alliance last month, Broadcom co-founder and Chief Technical Officer Dr. Henry Samueli is set to deliver the opening keynote at the ICCE just as the world's largest Consumer Electronics Show (CES) begins to wind down.
At the International Conference on Consumer Electronics (ICCE), Samueli will share how semiconductor technology enables the connected universe and the entire electronics value chain. He's set to illustrate how consumers use technology to stay connected in every aspect of their lives: at home, at work and on the go.

He'll also share his thoughts on why nonstop connectivity is what's next and how innovation in the semiconductor industry will lead the way toward Broadcom's mission of connecting everything.

ICCE, presented by the Institute of Electrical and Electronics Engineers (IEEE) Consumer Electronics Society, is where the world's leading engineers and technologists gather to present key technologies, products, services and architectures for consumer entertainment and information delivery. The ICCE marks its 30th anniversary this year.

Registered ICCE attendees can attend the session at 8:00 a.m. on Friday, January 13, at the Las Vegas Convention Center (LVCC), second floor of North Hall.

Read about Dr. Henry Samueli and learn more about Broadcom.</description>
      </item>
      <item>
         <title>Go Gigabit: More Connected Devices Drives Broadband Demand</title>
         <link>https://www.broadcom.com/blog/ces/go-gigabit-more-connected-devices-drives-broadband-demand/</link>
         <guid>https://www.broadcom.com/blog/ces/go-gigabit-more-connected-devices-drives-broadband-demand/</guid>
         <pubDate>January 8, 2015</pubDate>
         <description>LAS VEGAS: Truth be told, the big headlines from this week's International Consumer Electronics Show likely won't focus on Gigabit-speed broadband in the home. &quot;As we begin to use the Internet and other services more, the pipe going into the home needs to get bigger and bigger,&quot; said Jay Kirchoff, vice president of marketing in the Broadband &amp; Connectivity Group at Broadcom. &quot;A couple of years ago you probably had two or three devices in your home that consumed the Internet,&quot; Kirchoff said. &quot;Next year, it's probably going to be up to 50 devices in the home that will want Internet access.&quot; [caption id=&quot;attachment_14099&quot; align=&quot;alignright&quot; width=&quot;300&quot;] Inside the Broadcom Booth: The new BCM93390 demonstrates download speeds of more than 4 Gbps.[/caption] CES News: Broadcom Unleashes Gigabit Speeds for Consumer Cable Modems. Multi-gigabit-per-second networks to the home aren't all that common in the United States, outside of cities that are serviced by Google Fiber. Typically, a home connection delivers broadband speeds in the 20 Mbps range, with costly higher-tier packages topping out at 505 Mbps. In the past, the options for getting close to one-gigabit-per-second network speeds on cable networks at home were few and far between, mainly due to the cost of upgrading the infrastructure. But that's starting to change with the recent rollout of a standard dubbed DOCSIS 3.1, which is set to enable multi-gigabit speeds via cable modems in consumers' homes. Broadcom is the first silicon vendor to offer up a cable modem system-on-a-chip that's DOCSIS 3.1-ready. So what does this mean for the everyday cable consumer? Generally, it means more smartphones, tablets and laptops can happily coexist on the same home broadband network without any slowdown in performance, given that network speeds above 1 Gbps can support more devices connected on a single home network with faster, more</description>
      </item>
      <item>
         <title>A Strategic Partnership: Broadcom &amp; NetLogic (Part 2 of 3)</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-2-of-3/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-2-of-3/</guid>
         <pubDate>August 10, 2012</pubDate>
         <description>Broadcom acquired NetLogic Microsystems Inc. in May of 2012. The Santa Clara-based company was incorporated into Broadcom's Infrastructure and Networking Group (ING) to provide a more complete solution for mobile infrastructure, including switches, microwave backhaul and more.

In this video, I talk with Broadcom's Rajiv Ramaswami (Executive Vice President, General Manager, ING) and Ron Jankov (formerly NetLogic's CEO, now Senior VP &amp; GM of Processors and Wireless Infrastructure in the ING business unit) about the perks of the new partnership, and what makes the acquisition of NetLogic, the largest in Broadcom's history, so technologically important.

Continuing their conversation, Ramaswami and Jankov delve deeper into how their integrated systems will work together, why NetLogic's processor technology is so vital to the continued success of Broadcom and the future of the networking market. Jankov provides a detailed look at what the exact advantages of NetLogic's processors are, and why they are so important to mobile networking infrastructure.

Essentially, Broadcom's customers are constantly vying to be faster, while also reducing costs by building smaller. To solve this problem, NetLogic's processor allows eight chips to work together seamlessly, providing fast and reliable performance. This is vitally important, as cell phone service providers want processors that can run all of the functions of an LTE network on a system in places where the power grid is unreliable. The ability to run these towers in less-than-ideal conditions will allow Broadcom to gain an early foothold in emerging markets.

Watch the rest of the series: 
A Strategic Partnership: Broadcom &amp; NetLogic (Part 1 of 3)
A Strategic Partnership: Broadcom &amp; NetLogic (Part 3 of 3)</description>
      </item>
      <item>
         <title>Return to Sender - IPv6 Saves the Day</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/return-to-sender-ipv6-saves-the-day/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/return-to-sender-ipv6-saves-the-day/</guid>
         <pubDate>June 13, 2012</pubDate>
         <description>Imagine if the world ran out of home addresses. New homes would be built without addresses, making it difficult for packages to be delivered or friends to stop by for a visit. This is the sort of problem that the Internet is facing today. Like homes, devices such as PCs, smartphones and tablets each hold individual IP addresses that are used to route data. [caption id=&quot;attachment_2977&quot; align=&quot;alignright&quot; width=&quot;240&quot;] A Google Engineer celebrated IPv6 in a Google+ post (via CNET)[/caption] The World Wide Web currently operates on Internet Protocol Version 4, or IPv4, using a system of addresses with four sets of numbers, e.g. 192.0.2.4 or 198.51.100.1. The problem is that there are only so many four-number combinations to go around. With the adoption of mobile devices and demand for Internet services continuing to rise, the Internet is starting to run out of IP addresses. Enter IPv6. On June 6, the Internet Society and numerous companies such as Google, Cisco and Facebook participated in World IPv6 Day, the official launch of IPv6. The new protocol will allow many more devices to be connected to the Internet by greatly increasing the number of possible addresses. Without IPv6, new devices wouldn't be able to connect to the Internet, leaving countless people, particularly in emerging markets, without Internet access. According to a recent study, there were nearly 2.3 billion Internet users worldwide last year, with China and the U.S. topping the list with 513 million users and 245 million users, respectively. Yet, only about a third of the world's population has access to the Internet. While IPv6 solves the IP address challenge, it requires the installation of new IPv6-compatible hardware and software. Since IPv6 was standardized in 1999, Broadcom has been laser-focused on developing IPv6-ready products. Today, the full portfolio of Broadcom's enterprise-class StrataXGS switches supports IPv6. 
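The arithmetic behind the address crunch is simple enough to sketch. The snippet below is an illustrative aside (not from the original post); it compares the 32-bit IPv4 space against the 128-bit IPv6 space, and reuses the roughly 2.3 billion user count from the study cited here.

```python
# Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
ipv4_total = 2 ** 32     # 32-bit addresses: 4,294,967,296 (~4.3 billion)
ipv6_total = 2 ** 128    # 128-bit addresses: ~3.4 x 10^38

print(f"IPv4 address space: {ipv4_total:,}")
print(f"IPv6 address space: {ipv6_total:.3e}")

# Against ~2.3 billion Internet users, IPv4 cannot even provide
# two addresses per user, let alone one per connected device.
print(f"IPv4 addresses per user: {ipv4_total / 2.3e9:.2f}")
```

With barely one IPv4 address per user, address exhaustion follows as soon as each person carries more than one connected device, which is exactly the trend the post describes.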
The majority of the world's infrastructure hardware</description>
      </item>
      <item>
         <title>Broadcom is Recognized for Networking Innovation</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-is-recognized-for-networking-innovation/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-is-recognized-for-networking-innovation/</guid>
         <pubDate>November 15, 2012</pubDate>
         <description>The data network is evolving, and managing the insatiable demand for data, as well as the aggressive requirements of next-generation networks, is no easy task. But Broadcom, with its innovative offerings, continues to give network operators the freedom to adapt to emerging protocols without the redeployment of network equipment and the associated expenses.

In particular, Broadcom's BCM88030, a fully programmable network processor unit (NPU) that delivers more than twice the throughput of other NPUs on the market, offers the enhanced high-bandwidth capacity and scale that operators need. At the heart of the offering is a robust microcode development environment (MDE) that allows users to customize a device via an easy-to-use graphical interface, comprehensive debugging tools, and a complete simulation model of the device.

As much as we like to help companies make their networks more robust and customizable, as well as more efficient and cost effective, a bit of outside recognition every now and then is nice, too. This week, Broadcom's BCM88030 was named a finalist in the Core Innovation Category of the 2012 Fierce Innovation Awards.

The awards recognize companies and products that are defining the future of the broadband communications industry. The judges, a panel of operators, consider technology innovation, financial impact, market validation and end-user customer experience.

The Fierce Innovation Awards are offered by the publishers of tech news outlets FierceWireless, FierceTelecom and FierceCable.</description>
      </item>
      <item>
         <title>SDN: A Sea Change in the Data Center</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/sdn-a-sea-change-in-the-data-center/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/sdn-a-sea-change-in-the-data-center/</guid>
         <pubDate>May 7, 2013</pubDate>
         <description>Short for Software Defined Networking, SDN has become the latest enterprise tech acronym to buzz around the data center, and rightfully so. Information technology professionals are excited about the potential benefits of turning the complex task of provisioning, optimizing and monitoring network traffic over to software. It's one of the main themes of this year's Interop trade show, which kicked off this week in Las Vegas and is set to feature workshops and panels that will ask crowds of network administrators and other IT professionals to consider how, and to what degree, their organizations should invest in this paradigm shift in networking design and deployment. See News from Broadcom at Interop. In a nutshell, SDN gives administrators a programmable and customizable interface for controlling and orchestrating the operation of a collection of devices at different levels of abstraction. SDN is all about performance, agility and effective asset utilization. It's a new and evolving concept, one that Broadcom is actively helping to standardize and exemplify. As such, Broadcom is participating in discussions with groups that have an interest in achieving smarter and more flexible networking through apps, whether atop software controllers or in switches for network automation or virtualization. Those groups include networking hardware makers, consortia and software projects, including the VXLAN and NVGRE specifications and the Open Networking Foundation's OpenFlow specification. The early Network Virtualization Overlays and first OpenFlow-enabled systems have been developed on Broadcom-based switches. Just how will SDN reduce costs while also improving efficiency and productivity? Because SDN is still in its infancy, the full breadth of ways that SDN can benefit companies remains an open frontier. 
Previously: Interop Preview: Network Infrastructure in the Spotlight. But consider the amount of time and energy that goes into manual network management and configuration just to provision virtualization overlays or integrate with a service provider network. With</description>
      </item>
      <item>
         <title>Chris O'Reilly in EDN Magazine: &quot;As Security Threats Evolve, Innovation at the Silicon Level is Critical.&quot;</title>
         <link>https://www.broadcom.com/blog/chris-oreilly-in-edn-magazine-as-security-threats-evolve-innova</link>
         <guid>https://www.broadcom.com/blog/chris-oreilly-in-edn-magazine-as-security-threats-evolve-innova</guid>
         <pubDate>April 4, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in EDN Magazine, in which Chris O'Reilly, senior director of product marketing, Infrastructure and Networking Group at Broadcom, talks about the increased need for chip-level security in server technology. From EDN: Fast, reliable network connectivity is at the heart of business today, powering critical infrastructure systems, internal business operations, customer-facing communications and home-based entertainment services. But it's not only system performance that keeps network managers awake at night. As more people embrace multiple connected devices through a wide range of applications, security vulnerabilities are top-of-mind for both network managers and network hardware designers. As the type and scope of network traffic continues to evolve, so does the complexity of security threats. It is more important than ever to address greater levels of security at all points within these complex and varied network environments. Critical infrastructure networks (such as financial transactions and power plants) clearly require increased protection. But even lower-level networks must take greater care to protect personal information that may become exposed during everyday transactions. Emerging network platforms in the cloud, home gateways, and mobile enterprise have opened additional avenues for threats against data security and system performance. Even the simple process of uploading a photo to the cloud, much less using it to transmit enterprise data, requires the image to be secure at the device level, in the cloud, and at all points between as it traverses the network itself. As security threats continue to evolve and network providers vie for customers interested in high-performance, seamless security at every point within the network, innovation at the silicon level is critical. 
High performance security features integrated into silicon hardware allow network managers</description>
      </item>
      <item>
         <title>Android Taps Broadcom Software for Near Field Communications</title>
         <link>https://www.broadcom.com/blog/wireless-technology/android-taps-broadcom-software-for-near-field-communications/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/android-taps-broadcom-software-for-near-field-communications/</guid>
         <pubDate>November 14, 2012</pubDate>
         <description>Near Field Communication (NFC) technology has been touted as the key to unlocking mobile payments. And although the digital wallet, where payment information can be transmitted with the tap of a smartphone, is predicted to be the big future payment method of choice, the potential for this transformative technology is much greater and broader, with more immediate applications today at the device-to-device level. Now, NFC technology is about to spread its wings and push its way into the mainstream. Google, which is committed to an open and forward-looking software ecosystem with its Android mobile operating system, has selected Broadcom's open NFC software stack for all Android-based devices, including the new Google Experience Devices, the Nexus 10 tablet and Nexus 4 smartphone, which were announced October 29 as part of the update to the Android operating system. Understanding NFC: A quick explainer for those unfamiliar with NFC: it's a short-range radio technology (shorter than Bluetooth, for example) that consists of two chips, a reader and a tag. It allows devices, such as a smartphone, to transmit data to an NFC-enabled reader, such as a cash register, for a connection to occur. That connection can be much more than just money changing hands in a transaction. It potentially could allow consumers to tap-to-share and tap-to-stream content seamlessly from one NFC-enabled device to another. That means instant pairing (say, of a headset to a TV) and instant sharing (say, of photos captured on a smartphone, but displayed on a TV). NFC is also instrumental to the further development of contactless services that are currently in the works: the ability to swipe a bus pass, exchange virtual loot while gaming or get up-to-the-minute coupons while out shopping, for example. Spurring NFC Adoption: There's definitely some excitement around what's possible in this next mobile frontier. Usage is expected to explode in the next few</description>
      </item>
      <item>
         <title>Broadcom's NFC Technology Powers Brother's New Tap-to-Print Feature</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcoms-nfc-technology-powers-brothers-new-tap-to-print-feature/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcoms-nfc-technology-powers-brothers-new-tap-to-print-feature/</guid>
         <pubDate>July 11, 2013</pubDate>
         <description>We may live in a digital world but for some tasks, the old-school paper printout is still important.

Printers have come a long way, bringing Wi-Fi, Bluetooth and even cloud-based connectivity for easier printing from a number of devices. Still, the process of pairing a device to a printer remains cumbersome: logging in to wireless access, adding the printer to your system, getting the right security credentials and so on.

Those obstacles will soon be history.

This week, Brother announced a new line of Wireless LAN-enabled printers that use Near Field Communication (NFC) technology to enable tap-to-print and tap-to-scan capabilities. Users of compatible NFC smartphones and tablets, such as the Samsung Galaxy S4 and Google Nexus devices, can simply tap a mobile device to one of these printers to get a crisp paper copy of a document that would otherwise be trapped inside the device.

Powering this next-gen use case are Broadcom's embedded Near Field Communication (NFC) controller chip and Wi-Fi connectivity chips, both of which are designed for maximum interoperability between devices, whether they be two smartphones or, in this case, a smartphone and a printer. The printing and scanning are achieved by creating a secure connection between the phone and printer with NFC and then transferring the file over Wi-Fi. The only thing left to do is press the Print button.
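The two-step flow described above, NFC for the brief pairing exchange and Wi-Fi for the bulk transfer, can be sketched in miniature. This is a toy model added for illustration, not a real Broadcom or Brother API; every name in it is hypothetical.

```python
class Printer:
    """Toy model of an NFC-enabled printer that advertises its Wi-Fi details."""

    def __init__(self, ssid, passphrase):
        self.ssid = ssid
        self.passphrase = passphrase
        self.received = []

    def nfc_touch(self):
        # Step 1: the short-range NFC tap hands over only the small
        # payload needed to join the printer's Wi-Fi network.
        return {"ssid": self.ssid, "passphrase": self.passphrase}

    def wifi_receive(self, ssid, passphrase, document):
        # Step 2: the document itself travels over Wi-Fi, which is far
        # faster than NFC's modest data rate.
        if ssid == self.ssid and passphrase == self.passphrase:
            self.received.append(document)
            return True
        return False


def tap_to_print(document, printer):
    creds = printer.nfc_touch()          # tap: NFC pairing
    return printer.wifi_receive(         # then: Wi-Fi file transfer
        creds["ssid"], creds["passphrase"], document)


printer = Printer("BROTHER-PRINT", "s3cret")
print("printed:", tap_to_print(b"report.pdf", printer))
```

The design point the sketch captures is the division of labor: NFC contributes proximity and effortless pairing, while Wi-Fi contributes the bandwidth for the actual print job.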

Brother's embrace of NFC shows the continued adoption of this technology into new applications that effortlessly pair more devices for consumers in the home, office and beyond.

Related

	Android Taps Broadcom Software for Near Field Communications
	NFC Ready for Mainstream Adoption with New Combo Chip
	Making the Smartphone Switch: Multi-Core Does So Much More
</description>
      </item>
      <item>
         <title>How Broadcom's HULA Helps GPS Work Better in Crowded Cities</title>
         <link>https://www.broadcom.com/blog/wireless-technology/how-broadcoms-hula-helps-gps-work-better-in-crowded-cities-2/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/how-broadcoms-hula-helps-gps-work-better-in-crowded-cities-2/</guid>
         <pubDate>September 20, 2013</pubDate>
         <description>With global positioning capabilities baked into most modern smartphones, it's easy to take GPS for granted. Yet, pinpointing a location to a detailed spot on the planet is actually a lot more complicated than it looks. While GPS works great in most situations, it can be less than optimal in dense urban environments, such as the crowded downtown areas of major cities where billions of people live and work. These so-called urban canyons can reflect and even block GPS signals, causing significant errors. In these kinds of environments, with very few direct satellite views, unaided GPS can be off anywhere from tens of meters to even kilometers in some cases. &quot;User expectations for accuracy are very high, and we are dealing with signals that are reflected or blocked,&quot; said Steven Malkos, senior program manager in the Mobile &amp; Wireless Group at Broadcom. &quot;No matter how many satellites we put in the sky, GPS alone will not solve the downtown dense deep urban problem.&quot; Related: Broadcom's Latest GPS Tech Zooms in on Geofencing. That's where an innovative Broadcom-built technology comes in. The Hybrid Universal Location Application (HULA) takes the GPS and GNSS (Global Navigation Satellite System) data and combines it with location information collected from sensors and various other sources so it can better triangulate a specific location. These sensors, which engineers call Micro-Electro-Mechanical Systems (MEMS), have been a hot topic of late. Last week, Malkos spoke at a conference devoted to exploring MEMS-based innovations, and a blog post in EE Times quoted him as saying, &quot;Broadcom has been working on HULA since 2006. Now it provides superior accuracy even in the canyons of cities where buildings are often in the way of GPS signals.&quot; Although the conference discussion gets pretty techie, Malkos described how MEMS sensors on the smartphone - including accelerometers,</description>
      </item>
      <item>
         <title>Reinier van der Lee in Re/Code: &quot;Cut the Cord for Good&quot;</title>
         <link>https://www.broadcom.com/blog/wireless-technology/reinier-van-der-lee-in-recode-cut-the-cord-for-good/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/reinier-van-der-lee-in-recode-cut-the-cord-for-good/</guid>
         <pubDate>June 13, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Re/Code, in which Reinier van der Lee, Director of Product Marketing at Broadcom, talks about the future of wireless charging. From Re/Code: The Internet of Things looms as possibly the most disruptive shift in technology since the creation of the Internet itself. Analysts estimate that up to 30 billion devices will be wirelessly connected by 2018. According to recent reports, the average U.S. household already charges up to 10 devices at any one time, and that number is expected to rise as the number of connected devices continues to surge. Imagine charging multiple devices, including a smartphone, tablet and smartwatch, on a single surface: no more fussing around with multiple chargers and outlets. Wireless power, which allows users to charge multiple electronic devices without the use of a cable, promises to finally cut the cord for good. While wireless power technology has been around for some time, its evolution from first-generation inductive technology to second-generation resonant technology is now promising to take it mainstream. With inductive technology, two coils are required: a transmitter and a receiver. An alternating current is passed through the transmitter coil, generating a magnetic field that induces a voltage in the receiver coil, which is used to power a mobile device or charge a battery. And while inductive technology has certainly helped create interest in wireless charging, it is not without its limitations. Inductive technology only allows the user to charge one device at a time, and that device must be precisely aligned on a charging pad. Limitations in inductive technology are not the only reason wireless power has failed to take off. Confusion and frustration over multiple competing standards from different organizations have also muddied the waters. For wireless</description>
      </item>
      <item>
         <title>Ready, Set, Code: Android Comes to Wearables with Broadcom Tech in Tow</title>
         <link>https://www.broadcom.com/blog/wireless-technology/ready-set-code-android-comes-to-wearables-with-broadcom-tech-in-tow/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/ready-set-code-android-comes-to-wearables-with-broadcom-tech-in-tow/</guid>
         <pubDate>March 20, 2014</pubDate>
         <description>Wearables are a different animal altogether from other mobile consumer electronics like smartphones and tablets, especially when it comes to developing seamless experiences for users. Let's look at form factor, for one. These sleek little wristwatches, arm bands, fitness trackers and health monitors need to actually be wearable. Chip size matters a lot when devices are designed to be small, light and durable. They also need to consume minimal power and enable always-on apps to work in the background for lengthy periods of time without a battery recharge. That's especially important for fitness apps that can provide real-time speed, distance and time information on your wrist, or perhaps for apps that depend on location information to push relevant content. Broadcom engineers have zeroed in on these two vectors, power consumption and size, to make a difference in how wearable devices are getting to market. Broadcom's expertise in these two areas makes it a key partner to Google, which this week announced its Android Wear platform for wearable devices. Android Wear has a big, bold vision. The idea is to enable this new class of connected devices to understand the context of the world around you, and for consumers to interact with them simply and efficiently, with just a glance or a spoken word. That's a tall order for the Android developer community, which is abuzz with all the possibilities for novel types of consumer experiences with wearables. While it didn't disclose any product details, Google did say it's working with the industry's top chip- and software-makers, including Broadcom, to make this vision a reality. Broadcom, a longtime Android partner for mobile and, more recently, for automotive, has all of the components in place to make it</description>
      </item>
      <item>
         <title>RSDB Gets a Boost from Broadcom at MWC</title>
         <link>https://www.broadcom.com/blog/wireless-technology/rsdb-gets-a-boost-from-broadcom-at-mwc/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/rsdb-gets-a-boost-from-broadcom-at-mwc/</guid>
         <pubDate>March 2, 2015</pubDate>
         <description>The newest smartphones and wearables from the likes of Samsung, HTC, Sony and other mobile titans are expected to dominate headlines from this week's Mobile World Congress show in Barcelona. But behind all that flashy tech is something most consumers take for granted: robust wireless connectivity that enables them to video chat, surf the web, run dozens of applications and connect with social networks simultaneously. At Mobile World Congress, Broadcom will be talking up its multi-antenna Wi-Fi technology, which is expected to enable next-generation mobile devices to reach faster speeds and wider wireless range -- at a lower power budget than ever before. Broadcom touts a long history of leadership in Wi-Fi: It announced the industry's first 802.11n Wi-Fi chip in 2006, the first 802.11ac 5G WiFi chip in 2012 and the first 5G WiFi MIMO combo chip in 2013. The company announced today the BCM4359, the industry's first 5G WiFi/Bluetooth combo chip with what's called Real Simultaneous Dual Band (RSDB) support. Designed for high-performance smartphones and tablets, the new chip allows consumers to run applications on two bands simultaneously, which means a user can stream YouTube content to a smart TV while playing a game on their smartphone screen. MIMO (pronounced My-Mo) uses two, four, or six antennas to transmit and receive two, four, or six streams of data in parallel, boosting throughput and wireless range. When combined with Broadcom's latest 4x4 MU-MIMO chip for routers (which was announced during the International Consumer Electronics Show in January), the new chip delivers end-to-end MU-MIMO (that's Multi-User MIMO) for a big performance upgrade. The added performance comes from combining 5G WiFi with MU-MIMO technology, which enhances networking speeds by enabling routers to simultaneously communicate with multiple devices. Older routers, by contrast, only communicated with one device at a time. That means</description>
      </item>
      <item>
         <title>Power Up with AirFuel: Wireless Charging Set to Accelerate</title>
         <link>https://www.broadcom.com/blog/wireless-technology/power-up-with-airfuel-wireless-charging-set-to-accelerate/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/power-up-with-airfuel-wireless-charging-set-to-accelerate/</guid>
         <pubDate>November 16, 2015</pubDate>
         <description>The long-held vision for wireless charging, in which consumers juice up their devices nearly anywhere they shop, eat, ride or sleep, is finally coming together. Earlier this month, two big standards-setting bodies joined together under a common name with the backing of nearly 200 member companies, including Broadcom. Their goal is to codify technology standards across the product development ecosystem to make wireless charging mainstream. The AirFuel Alliance, as the newly branded group is called, replaced the Alliance for Wireless Power (A4WP) and Power Matters Alliance (PMA). Industry watchers can expect to see this new brand show up on a number of wireless charging products over the next few months, including charging pads, mats and cases for consumer electronics. The new group identity emerges at a critical market inflection: Consumers are finally becoming acquainted with the benefits of on-the-go, contact-based charging for their devices and the idea of leaving cords and cables behind. "Wireless charging is as hot as it's ever been, in large part because standards bodies, manufacturers and retailers all seem keen on moving it along," said Reinier van der Lee, a product line director in the Broadband &amp; Connectivity Group at Broadcom. Wireless charging is popping up in a whole host of places: hotels, stores, airports, cars and restaurants, to name a few. The technology is already showing up in some 200 Starbucks locations in San Francisco and around the United Kingdom, where customers can charge their smartphones and other devices on small circular Powermat stations built into tables. The 2016 Kia Optima and vehicles from the BMW 7 Series can power a favorite device on charging pads built into the cars' center consoles. And Ikea has released a nightstand that has built-in wireless charging capabilities. Likewise, a growing number of smartphones and wearables are also</description>
      </item>
      <item>
         <title>Broadcom Opens 4G World with Best of 4G Award</title>
         <link>https://www.broadcom.com/blog/broadcom-opens-4g-world-with-best-of-4g-award</link>
         <guid>https://www.broadcom.com/blog/broadcom-opens-4g-world-with-best-of-4g-award</guid>
         <pubDate>October 30, 2012</pubDate>
         <description>It's our first year at the 4G World conference, and we're happy to report that we'll be bringing home an award. At a ceremony during the Chicago conference last night, Broadcom received a Best of 4G award in the Mobile Backhaul and Core Network Innovation category. The Best of 4G Awards recognize companies driving the adoption of 4G mobile broadband. All submissions were reviewed by an independent panel of judges. Broadcom won for its BCM56240 small cell switch solution, which accelerates the roll-out of high-bandwidth indoor and outdoor wireless networks for operators and service providers as they upgrade legacy mobile networks. The BCM56240 is also the world's first small cell Ethernet solution to integrate traffic management and deep packet buffering in a single chip. "The Best of 4G Awards celebrates excellence in 4G innovation," said Joe Braue, General Manager of 4G World. "The caliber of entries received in this year's awards was very high, and we congratulate Broadcom for demonstrating outstanding innovation and leadership with their BCM56240 small cell switch solution." During this week's show, Broadcom is demonstrating its full lineup of mobile network products (switches, processors and offerings in the wireless infrastructure and microwave space), including its latest development board. The new BCM51030 digital front end (DFE) development board provides all the components necessary for a complete transmit loop, with the exception of the customer's selected driver and power amplifier. Boards are now available with 1, 2, or 4 transmit channels and can support any protocol combination, any power amplifier technology or architecture and any frequency band with an appropriate analog filter. Spanning the network, from the access point to the edge, aggregation and core, Broadcom provides the high-quality voice connections, faster app downloads and uninterrupted video streaming that support the mobile experience consumers crave. We'll also be sharing some expertise at the show. Ran</description>
      </item>
      <item>
         <title>Innovative solutions for tomorrow’s wireless ecosystem</title>
         <link>https://www.broadcom.com/blog/innovative-solutions-for-tomorrows-wireless-ecosystem</link>
         <guid>https://www.broadcom.com/blog/innovative-solutions-for-tomorrows-wireless-ecosystem</guid>
         <pubDate>November 29, 2016</pubDate>
         <description>Wireless connectivity has become an integral part of modern-day life as mobile devices have made it easier for people to live, work and play. Modern mobile devices connect to an extensive ecosystem of wireless networks and connected devices, enabling everyday wireless applications like voice/text communications, location-based applications, wireless audio, video streaming and internet connectivity. Broadcom leads the industry in technical innovation across many wireless disciplines and offers a unique product portfolio addressing today’s most challenging problems facing the wireless industry. Indeed, “connecting everything” is more than just corporate jargon at Broadcom. Broadcom is uniquely equipped with leading-edge technologies for LTE Advanced, Wi-Fi, Bluetooth and GNSS applications. From advanced RF front-end solutions that optimize cellular communications performance to leading-edge connectivity solutions that maximize wireless data transfer, the advantages of Broadcom wireless solutions are manifested in end products through a better user experience: strong signal reception, high data throughput, fast wireless connections, accurate GPS and navigation, and long battery life. LTE Advanced Broadcom is the industry leader in LTE CA RF front-end solutions for mobile handsets. Leveraging unique FBAR filter, best-in-class PA and ultra-low-noise-figure LNA technologies with in-house RF module expertise, Broadcom provides the most advanced portfolio of LTE CA RF front-end solutions, enabling mobile data aggregation of more than 25 frequency segments and delivery of optimum data transfer and power efficiency. Wi-Fi Broadcom offers leading-edge Wi-Fi solutions for mobile clients and infrastructure devices. Broadcom has led the Wi-Fi industry through every major technology change and shipped more than 3 billion units of Wi-Fi/Bluetooth combo chips since 2008. Bluetooth Broadcom offers advanced Bluetooth solutions as part of the Wi-Fi/Bluetooth combo portfolio. An advanced coexistence mechanism ensures that Bluetooth fidelity is maintained despite operating in the same spectral band as Wi-Fi and near strong LTE signals. GNSS Broadcom is a</description>
      </item>
      <item>
         <title>GaN Transistor Gate Drive Optocouplers</title>
         <link>https://www.broadcom.com/blog/gan-transistor-gate-drive-optocouplers</link>
         <guid>https://www.broadcom.com/blog/gan-transistor-gate-drive-optocouplers</guid>
         <pubDate>May 5, 2017</pubDate>
         <description>Gallium Nitride (GaN) power semiconductors are rapidly emerging into the commercial market, delivering several benefits over conventional silicon-based power semiconductors. GaN can improve overall system efficiency, and its higher switching capability can reduce overall system size and costs. These technical benefits, coupled with lower costs, have accelerated the adoption of GaN power semiconductors in applications like industrial power supplies and renewable energy inverters. Broadcom gate drive optocouplers are used extensively in driving silicon-based semiconductors like IGBTs and power MOSFETs. Optocouplers provide reinforced galvanic insulation between the control circuits and the high voltages. Their ability to reject high common-mode noise prevents erroneous driving of the power semiconductors during high-frequency switching. This paper will discuss how the next generation of gate drive optocouplers can be used to protect and drive GaN devices. Advantages of GaN Gallium Nitride is a wide-bandgap (3.4 eV) compound made up of gallium and nitrogen. The bandgap is the energy range in a material in which no electron states exist. Wide-bandgap GaN has high breakdown voltage and low conduction resistance characteristics. Unlike a conventional Si transistor, which requires a larger chip area to reduce on-resistance, a GaN device is smaller in size. This reduces the parasitic capacitance, which allows high-speed switching and easy miniaturization. The low conduction resistance is achieved because the specific on-resistance of a power semiconductor is inversely proportional to the cube of the critical breakdown field. In other words, a GaN device is expected to have an on-resistance approximately three orders of magnitude below the silicon limit. In addition, GaN devices have a high electron saturation velocity that makes them suitable for high-speed applications. Figure 1. Silicon vs. GaN transistor structure and size Power semiconductor is the key device and</description>
      </item>
      <item>
         <title>Automotive electronics: Embracing the disruptive future</title>
         <link>https://www.broadcom.com/blog/automotive-electronics-embracing-the-disruptive-future</link>
         <guid>https://www.broadcom.com/blog/automotive-electronics-embracing-the-disruptive-future</guid>
         <pubDate>October 10, 2017</pubDate>
         <description>The automotive industry has witnessed unprecedented changes in the last two decades, driven heavily by the rapid adoption of in-vehicle electronics. Modern vehicles are equipped with advanced safety, security, infotainment, and a whole host of features for comfort and convenience to make driving easier, safer and more enjoyable. As vehicles become increasingly advanced and connected, semiconductors will play an ever more important role in the automotive ecosystem. The next several decades promise more semiconductor opportunities and challenges as new and emerging technologies are introduced to the automotive market. Numerous disruptive technologies such as artificial intelligence (AI), augmented reality (AR), V2X communications, and autonomous driving are starting to revolutionize the automotive industry. Drivers will be able to rely on AI to take over the steering wheel in the event they are not able to drive. Future cars will be equipped with AR capabilities in the windshield to provide unobstructed views of the vehicle surroundings and help make driving safer and easier. V2X technology will connect cars together wirelessly and enable an efficient traffic-control ecosystem that optimizes traffic flow and minimizes accidents. As auto manufacturers embrace digital disruption, Broadcom continues to drive innovation and develop new solutions to advance next-generation automotive electronics. Broadcom’s Automotive Solutions In-Vehicle Infotainment—From LTE to GPS, Broadcom provides a variety of high-performance RF components addressing various technical challenges in the RF front end. Broadcom’s BroadR-Reach® and fiber optic MOST Tx/Rx solutions provide reliable and robust connectivity, delivering optimized multimedia entertainment and information. Broadcom’s LED and optical sensing solutions can greatly enhance the user interface with unique lighting and gesture sensing, enabling a richer and more connected infotainment experience. Advanced Driver Assistance Systems—Broadcom’s BroadR-Reach and fiber optic MOST Tx/Rx solutions enable robust connectivity and networking of cameras, sensors, and application processors that serve a multitude of</description>
      </item>
      <item>
         <title>Computex 2015: Home Routers Advance with 5G WiFi XStream</title>
         <link>https://www.broadcom.com/blog/computex-2015-home-routers-advance-with-5g-wifi-xstream</link>
         <guid>https://www.broadcom.com/blog/computex-2015-home-routers-advance-with-5g-wifi-xstream</guid>
         <pubDate>June 2, 2015</pubDate>
         <description>To the average tech consumer, eight-stream 802.11ac Wi-Fi with MU-MIMO may sound a bit like alphabet soup. But for tech-savvy early adopters, power users and prosumers, who likely have more than a dozen wireless devices connected to their home wireless networks, it sounds less like a jumble of acronyms and more like music to their ears. In a high-tech home, Wi-Fi networks are tasked with serving computers, tablets, smartphones, wearable devices and, increasingly, home appliances such as light fixtures, audio systems and thermostats. Throw in a smart TV and perhaps a media gateway or set-top box that can send high-bandwidth 4K content to an Ultra HD display, and consumers could face a serious slowdown on their home wireless network. "Consumers are bringing more connected devices into their homes than ever before, which brings concerns about congestion on their networks and degraded performance," said Manny Patel, senior director of product marketing, Wireless Connectivity, at Broadcom. "With 4K content coming down the pipeline, they need a robust traffic cop to manage their household bandwidth needs." That brings us to MU-MIMO, which stands for multi-user, multiple input-multiple output. It's an engineering feat that combines antenna and radio technologies to enable more connected devices to receive data simultaneously. Broadcom, a longtime leader in Wi-Fi, announced its first 5G WiFi XStream MU-MIMO offering last year, and followed it up with a suite of 802.11ac systems-on-a-chip for set-top boxes, media gateways and router platforms at January's Consumer Electronics Show. At Computex in Taipei this week, Broadcom made a big leap forward with a dual-band, eight-stream 5G WiFi platform for high-performance routers, which promises multi-gigabit downloads that make it the fastest router platform available today. With its 5G WiFi XStream pentacore platform, Broadcom is able to improve wireless networking efficiency and deliver an 8x improvement in speed versus standard dual-band routers. Broadcom is</description>
      </item>
      <item>
         <title>Gallery: The sights of Broadcom at Interop</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/gallery-the-sights-of-broadcom-at-interop/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/gallery-the-sights-of-broadcom-at-interop/</guid>
         <pubDate>May 9, 2012</pubDate>
         <description>The halls of the Mandalay Bay Convention Center in Las Vegas have been bustling with activity this week as IT professionals gathered for the Interop conference. Near the center of the show expo floor, Broadcom's booth was an attraction as folks converged to learn more about Broadcom's innovative technologies that are transforming the data center and giving enterprise companies more flexibility, more potential for cost savings and an opportunity to operate with greater energy efficiency.

Check out the full photo gallery on Broadcom's Facebook page.


Full Coverage: Broadcom at Interop 2012

	Broadcom at Interop: Power Consumption Technology Plays Important Role
	Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet
	Technology Moving at the Speed of Life: Broadcom Enables Massive Network Scalability
	Enterprise 2.0: Broadcom puts Network Managers in the Fast Lane
	Broadcom at Interop: Next-Generation Data Centers Shift into High Gear
	Broadcom at Interop: Unprecedented Innovation for Next Gen Data Center, Green IT and Enterprise 2.0
	Broadcom at Interop: Knowledge-Base and Multi-Core Processors Complete Broadcom Portfolio
</description>
      </item>
      <item>
         <title>A Strategic Partnership: Broadcom &amp; NetLogic (Part 1 of 3)</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-1-of-3/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/a-strategic-partnership-broadcom-netlogic-part-1-of-3/</guid>
         <pubDate>August 10, 2012</pubDate>
         <description>Broadcom acquired NetLogic Microsystems Inc. in May of 2012. The Santa Clara-based company was incorporated into Broadcom's Infrastructure and Networking Group (ING) to provide a more complete solution for mobile infrastructure, including switches, microwave backhaul and more.

In this video, I talk with Broadcom's Rajiv Ramaswami (Executive Vice President and General Manager, ING) and Ron Jankov (formerly NetLogic's CEO, now Senior VP and GM of Processors and Wireless Infrastructure in the ING business unit) about the perks of the new partnership, and what makes the acquisition of NetLogic, the largest in Broadcom's history, so technologically important.

The interview touches on how NetLogic's knowledge-based and embedded processors will help Broadcom adapt to the rapidly changing mobile infrastructure market. The deal allows Broadcom to provide more competitive chips and boost market share.

NetLogic's embedded processors also improve the value of Broadcom's broad IP portfolio, with the current $3.5B segment projected to grow to $5.2B by 2015 as global demand for connected devices and greater data loads continues to grow. By providing an integrated solution, Broadcom will be the chipmaker of choice for companies looking to provide their customers with improved speed and performance. Service providers want more reliable service that follows their customers wherever they go, and now they will be able to build powerful small cells that combine Broadcom's connectivity prowess and NetLogic's speedy processors in one power-efficient solution.


Watch the rest of the series: 
A Strategic Partnership: Broadcom &amp; NetLogic (Part 2 of 3)
A Strategic Partnership: Broadcom &amp; NetLogic (Part 3 of 3)
 </description>
      </item>
      <item>
         <title>Appetite for Bandwidth: Broadcom Chip Doubles Network Processing Power</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/2189/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/2189/</guid>
         <pubDate>April 26, 2012</pubDate>
         <description>Earlier this week, Broadcom came together with other top tech leaders and more than 50 international technology reporters and industry analysts for the annual Globalpress 2012 Electronics Summit in Santa Cruz, California.

At the event, the company launched the industry's first 100 Gbps full-duplex network processor unit (NPU): a powerful chip that delivers more than twice the throughput of any NPU on the market. This new chip, with unparalleled integration, addresses the overwhelming appetite for bandwidth that stems from the growing number of connected devices accessing everything from streaming video to Facebook updates and mobile apps. Cisco's Visual Networking Index forecasts an estimated 47 billion connected devices by 2015.

Handling that sort of traffic will require technology that's more powerful and robust. Writer LG Nilsson of VR-Zone.com recently wrote:
It's easy to forget that the Internet is connected via a vast amount of switches and routers and with an ever increasing bandwidth demand as we get more and more connected devices, this backbone of the internet needs constant upgrades.
In turn, service providers must transform their networks by adopting higher-bandwidth links. It's an overall business trend with a major impact, as Bloomberg and Businessweek have reported.

The industry is buzzing about Broadcom's news. Check out some of the articles about the company's announcement:

	EE Times: Broadcom Aims to Spread 100-Gbit Ethernet with Single-Chip Solution

	Network Computing: Broadcom Enables Programmable 100GbE Switches
	Data Center Knowledge: Broadcom Introduces 100Gbps Duplex Network Processor
	ElPort.News: First 100Gbps Full Duplex Network Processor
	Softpedia: Broadcom Unveils World's Most Powerful Network Processor

 </description>
      </item>
      <item>
         <title>Trident-II+ Switches Bringing Flexible Data Center Capabilities to the Enterprise</title>
         <link>https://www.broadcom.com/blog/trident-ii-switches-bringing-flexible-data-center-capabilities-</link>
         <guid>https://www.broadcom.com/blog/trident-ii-switches-bringing-flexible-data-center-capabilities-</guid>
         <pubDate>April 20, 2015</pubDate>
         <description>Companies already know about the avalanche of data traffic that's expected to overwhelm their data centers in the coming years. Social networking, streaming video, the expansion of public and private clouds and the addition of many new connected devices all add to the continuous drumbeat of more users and more content, driving the need for more bandwidth and more Ethernet. With Cisco heralding some 24,000 petabytes of data per month flooding businesses' networks by 2018, data center managers are attempting to stem the coming data tide with scalable growth strategies that make the most of their investments. Broadcom has long been familiar with what's at the top of their wish lists: a network infrastructure solution that's based on standard, power-efficient hardware, can handle virtualization, scalability and security, and offers an inexpensive path for upgrading in the future. Recently, Broadcom introduced the StrataXGS Tomahawk Series, which delivers not only the speed but also the scale and the ability to lower the cost of running those enormous cloud-scale networks. But when it comes to serving the needs of the enterprise, Broadcom's StrataXGS Trident family of switching SoCs, the most pervasive switch SoC in the industry, is going further to help solve the pain points of the everyday network administrator. Today, Broadcom is building upon its award-winning Trident lineup with the launch of the StrataXGS Trident-II+ switch portfolio, which will deliver the benefits of the market-leading StrataXGS Trident switching family to an even wider set of enterprise networks, including mega-scale data centers. The Pluses Behind Trident-II+ One of the reasons the Trident-II lineup is the industry-leading choice for data center operators is its ability to ease the bandwidth crunch and enable network-wide virtualization of the IT infrastructure, both long-standing pain points for data center managers. Rather than contend with several different packet processors,</description>
      </item>
      <item>
         <title>NFC Forum Chooses Broadcom's New NFC Solutions to Validate Certification Test Bed</title>
         <link>https://www.broadcom.com/blog/wireless-technology/nfc-forum-chooses-broadcoms-new-nfc-solutions-to-validate-certification-test-bed/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/nfc-forum-chooses-broadcoms-new-nfc-solutions-to-validate-certification-test-bed/</guid>
         <pubDate>October 26, 2011</pubDate>
         <description>Broadcom's Participation in Development of Certification Platform Underscores Commitment to Driving Interoperability for Mobile Payments and Simplified Connectivity News Highlights: NFC Forum leverages Broadcom's new BCM2079x NFC product family to validate its certification test suite Broadcom's NFC solutions chosen as the benchmark for the test tools that the NFC Forum will use to guarantee compliance Partnership underscores Broadcom's commitment to driving industry standards Today at 4G World, the largest global 4G expo, Broadcom announced a partnership with the NFC Forum to help develop a new NFC certification test suite based on Broadcom technology. The NFC Forum's official certified test tools will leverage Broadcom's recently introduced BCM2079x NFC family of products as the benchmark for validation. Meeting this benchmark will be an important step in the NFC Forum Certification Program, which provides device manufacturers with a means of establishing that their products conform to published, industry-standard specifications. The NFC Forum test tools will thus play a key role in ensuring compliance and interoperability of future NFC products and solutions. In addition, companies can place the NFC Forum N-Mark, which indicates the touch point that triggers NFC services, on their devices; be included in the NFC Forum's certified product register; and gain global credibility. To further educate the mobile ecosystem on the importance of standards and the NFC Forum certification process, Mohamed Awad, member of the NFC Forum board of directors and associate product line director for NFC products at Broadcom Corporation, will participate in a session, NFC Forum Certification: The Key to Interoperable Innovation, at the NFC Summit at 4G World 2011. His session will address common questions such as what certification is and why it matters, as well as the benefits of being certified. Koichi Tagawa, chairman of the NFC Forum, said &quot;Broadcom has been a driving force behind the evolution of NFC. As we</description>
      </item>
      <item>
         <title>5G WiFi Gets Vote of Confidence: See You at Mobile World Congress</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5g-wifi-gets-vote-of-confidence-see-you-at-mobile-world-congress/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5g-wifi-gets-vote-of-confidence-see-you-at-mobile-world-congress/</guid>
         <pubDate>February 25, 2012</pubDate>
         <description>It's not often that a company welcomes a competitor to the game, but Qualcomm's recent announcement of an 802.11ac product line is a bit different because it bodes well for the technology industry and for consumers looking forward to the next generation of WiFi products. You see, we at Broadcom have been sampling our own 802.11ac product family, what we're calling 5G WiFi, to customers and partners since December. And next week at Mobile World Congress, we'll be showcasing it even further. Being first to market with 802.11ac, at least two quarters ahead of our competitors, feels like the right position for us. We expect customers to be enjoying the full benefits of 5G WiFi as early as Fall 2012. But it's about more than just the 5G WiFi speeds. Reliability will be improved, and backward compatibility with existing WiFi products will make for a seamless upgrade. Our 5G WiFi product line addresses all market and consumer needs for 2012, with solutions for state-of-the-art networking equipment and mobile devices. The demand for the next generation of WiFi technology is strong, and Qualcomm's entry further validates what we realized some time ago. NPD In-Stat indicates that the number of 802.11ac-enabled devices will rise to 1 billion units in 2015. Our 5G WiFi solutions extend our position as the leader in combos and connectivity. In fact, we enable the majority of dual-band Wi-Fi solutions. Our solid track record, 1 billion combo chips to date, is proof of our best-in-class IP, broad and stable software, power efficiency and focus on performance and integration. We have consistently shown the ability to balance key variables (features, power, integration, performance and footprint) in our connectivity solutions. Our 5G WiFi roadmap is no different, and there's plenty of excitement about the rapid adoption of 5G WiFi and the exciting new possibilities</description>
      </item>
      <item>
         <title>5G WiFi Grows: Belkin Adds to Lineup of Next-Gen Wireless Products</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5gwifi-grows-belkin-adds-to-lineup-of-next-gen-wireless-products/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5gwifi-grows-belkin-adds-to-lineup-of-next-gen-wireless-products/</guid>
         <pubDate>May 10, 2012</pubDate>
         <description>The lineup of products powered by Broadcom's 5G WiFi technology continues to grow, with Belkin being the latest to adopt the technology for a new portfolio of products. Today, Belkin is announcing the June availability of its line of wireless routers supporting the new 802.11ac wireless networking standard. Broadcom was the first company to release a chip powered by the technology, a more powerful and robust successor to the current 802.11n technology. Broadcom's 5G WiFi chips deliver Ethernet-quality speeds that are three times faster and six times more power efficient than previous generations of Wi-Fi. For consumers, this means that high-definition video can easily stream to multiple devices anywhere on the Wi-Fi network, even a home network that has TVs, game consoles, PCs, smartphones and tablets all tapping into the connection. The 5G WiFi technology delivers improved wireless range, a broader coverage area, faster connectivity for advanced video streaming and simultaneous connections by multiple devices, all while helping preserve device battery life. By supporting 802.11ac technology, Broadcom provides the wireless networking backbone needed to reliably handle the increasing demands that stem from more mobile devices streaming content. Broadcom's continued support of this new standard demonstrates its commitment to enabling the burgeoning ecosystem of the world's fastest, most reliable wireless connectivity. When Broadcom first unveiled its 5G WiFi chips at CES, the industry was anticipating a strong wireless technology to handle advanced web services, such as video chat and conferencing, that are starting to hit the scene. Today, the power of 5G WiFi has become a buzz topic in the industry, with other companies talking about their plans for 5G WiFi but still no chips or products to unveil. 
Belkin is one of the first companies to develop products using Broadcom's technology. As a leader in mobile accessories and solutions, Belkin is strongly</description>
      </item>
      <item>
         <title>CTO Henry Samueli in The Hindu Business Line: &quot;The driving force behind the IoT is low-cost wireless connectivity&quot;</title>
         <link>https://www.broadcom.com/blog/cto-henry-samueli-in-the-hindu-business-line-the-driving-force-</link>
         <guid>https://www.broadcom.com/blog/cto-henry-samueli-in-the-hindu-business-line-the-driving-force-</guid>
         <pubDate>October 29, 2014</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in The Hindu Business Line, in which Dr. Henry Samueli, chief technical officer and chairman of the board at Broadcom, talks about how the Internet of Things will fuel engineering innovation. From The Hindu Business Line: The Internet of Things (IoT) is shaping up to be one of the more disruptive market opportunities since the creation of the Internet itself, connecting billions of smart devices around the globe and creating billions (some say trillions) of dollars in product and service opportunities globally. Because of its significance as both an end market and a leading source of innovation in the global technology industry, India will exert considerable influence on the development of IoT. The country possesses a huge pool of extremely talented engineers who are eager to innovate and create new products in the IoT space. The next change In one sense, IoT is simply the next logical progression in the consumer electronics industry, which leverages the fact that semiconductor and sensor technology inexorably gets smaller, smarter, cheaper and lower-power. Giant mainframe computers gave way to minicomputers, minis to PCs, PCs to laptops, laptops to tablets and smartphones, and now to wearable devices and countless things with wirelessly connected sensors and processors. The IoT has a unique advantage that will spawn an entire new generation of innovators. Unlike computers, laptops and smartphones, which are designed by multinational corporations with teams of hundreds of engineers assigned to each product, IoT devices are dramatically simpler and can be designed by a handful of bright young engineers working in a garage. With hundreds of new ideas for applications and devices for the IoT being incubated daily, it</description>
      </item>
      <item>
         <title>Cable Connections Power Community Wi-Fi at IBC</title>
         <link>https://www.broadcom.com/blog/home-networking/cable-connections-power-community-wi-fi-at-ibc/</link>
         <guid>https://www.broadcom.com/blog/home-networking/cable-connections-power-community-wi-fi-at-ibc/</guid>
         <pubDate>September 7, 2012</pubDate>
         <description>Everybody loves a Wi-Fi hot spot, that instant wireless signal that connects most mobile devices to the vast world of online content. But the reality is that connecting to a hot spot is sometimes more trouble than it's worth. Sometimes hot spots are crowded with other users, bogging down connection speeds. Other times, logging in can be a drawn-out, intrusive registration process that compromises your privacy and time. Even worse, if a Wi-Fi hot spot connection is less than secure, checking your bank balance or making an online purchase could be a risky move. The antidote to many of these Wi-Fi woes is right in your living room: the DOCSIS 3.0-enabled cable set-top box or media gateway containing Broadcom's technology. Coupled with specialized software, Broadcom is helping cable operators offer on-the-go Wi-Fi hot spots, dubbed Community Wi-Fi, as a new service to their subscription customers. European cable operators can see the technology in action at the International Broadcasting Convention, the continent's leading trade show, taking place this week in Amsterdam. How it Works Here's how: Home cable subscribers use a media gateway or an in-home device of some sort that emits a Wi-Fi signal. The most familiar scenario involves connecting devices, whether a computer, smartphone, tablet or even a gaming console, to that Wi-Fi connection with a passcode. Under this new Community Wi-Fi scenario, a friend who stops by your home and who is also a subscriber of your cable company would be able to instantly tap into the Wi-Fi signal in your home. Here's the plus side: Your friend doesn't need a password to join because it's not actually your personal Wi-Fi network that he is using to access the Internet. Instead, it's a secondary, open Wi-Fi connection that's being made available to the cable company's customers, via your home connection. And by the way, that</description>
      </item>
      <item>
         <title>Broadcom Wins Market Leadership Award for Second Year in a Row!</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/broadcom-wins-market-leadership-award-for-second-year-in-a-row/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/broadcom-wins-market-leadership-award-for-second-year-in-a-row/</guid>
         <pubDate>February 5, 2012</pubDate>
         <description>It's always nice to be recognized with an award, but it means even more when it's based on feedback from the folks who actually use our products. Each year IT Brand Pulse, an independent market research firm, conducts a survey to determine end-user perceptions of the top industry vendors. They look at overall market leadership, performance, innovation, reliability, support and price. This is incredibly valuable for vendors like us, because there is no other independent survey that provides a comprehensive view of customer perceptions. At the end of the day, there is no one more important than our customers. So, this award is really our report card of sorts. When the results were released this month, we were thrilled to see that Broadcom took the Market Leader award in LAN-on-Motherboard (LOM) for the second straight year, and swept five of the six categories of brand leadership, including market leader, price, performance, reliability and innovation. Interestingly, in a previous IT Brand Pulse survey, they found that a full two-thirds of server administrators know what brand of LOM device is in their servers because it affects the way their network operates. Customers truly care what technologies are under the hood running their IT infrastructures. Needless to say, Broadcom is honored to have been selected as the brand leader, particularly by the IT professionals who rely on our technology every day. This award is a tribute to our continued commitment to providing the best LOM solutions on the market. As we look to the next major data center upgrade with Romley/Sandy Bridge servers, we are excited to be working with our OEM partners to bring the highest-performance networking solutions to the market. Read more about how Broadcom stacked up in a press release announcing the IT Brand Leader Awards (PDF). So, I would like to thank our employees for their</description>
      </item>
      <item>
         <title>The Road Less Traveled: Server Virtualization and the Data Center</title>
         <link>https://www.broadcom.com/blog/the-road-less-traveled-server-virtualization-and-the-data-center</link>
         <guid>https://www.broadcom.com/blog/the-road-less-traveled-server-virtualization-and-the-data-center</guid>
         <pubDate>May 10, 2012</pubDate>
         <description>Server architecture has come a long way since the days when each server was assigned to a specific application or task. Because many tasks don't play well with others, each task required its own dedicated machine. It was a very basic, albeit not too efficient, way to architect a data center network. With today's mega data centers and cloud-based services on the rise, computer networks are becoming larger and more complex, rendering the dedicated servers of yesteryear obsolete. As the sheer number of servers continues to increase, one can imagine the amount of physical space required by a data center that's not only overcrowded with racks of servers but also consuming massive amounts of power and generating heat. To move past traditional data center architecture, networking professionals often face a labor-intensive and time-consuming process of reconfiguring workloads. Server virtualization attempts to address both of these issues in one fell swoop, while creating more flexibility and greater efficiency. Using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device that is capable of running its own operating system (OS). Virtualization presents many advantages, enabling users to consolidate computing hardware resources and to run multiple virtual machines concurrently on that consolidated hardware. Now, IT and data center administrators are leveraging encapsulation and tunneling strategies to address the networking problems created by complex virtual environments and the difficulties of extending network segments long distances between data centers. In essence, tunneling fully abstracts the physical network, extending the VLAN construct to offer multi-data center network scalability. 
Broadcom's NetXtreme II has perfected the art of tunneling. As with most network protocol processing, the bulk of the intelligence resides in the NIC in order to ensure low CPU utilization and maximum performance. By eliminating islands and taking advantage of NIC</description>
      </item>
      <item>
         <title>Interop 2013: Broadcom Helps Enterprises Embrace Advanced Networking</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/interop-2013-broadcom-helps-enterprises-embrace-advanced-networking/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/interop-2013-broadcom-helps-enterprises-embrace-advanced-networking/</guid>
         <pubDate>May 7, 2013</pubDate>
         <description>The Interop show in Vegas is in full swing, and IT professionals of all stripes are seeing the latest in the networking equipment that keeps businesses running. They are also looking to spend money: according to an Infonetics report, enterprises are expected to spend an average of $12.7 million on their network infrastructure this year, a 15 percent increase over last year. Driving these trends is &quot;growth, growth, growth,&quot; notes Matthias Machowinski, directing analyst for enterprise networks and video at Infonetics Research. &quot;Growth is coming in all forms: in the amount of traffic traversing networks, in network capacity, ports, WAN bandwidth, and yes, even networking expenditures. Lowering the cost of networking is top of mind for network managers,&quot; Machowinski said. Many businesses would like the ability to upgrade over time, instead of making one massive investment that may become obsolete faster than they would like. Broadcom's network infrastructure technologies aim to get at the heart of the issue, according to Ed Redmond, vice president and general manager, compute and connectivity, in the Infrastructure and Networking Group at Broadcom. With the latest Broadcom products, customers can deliver higher-performance, lower-power solutions while leveraging existing investments. To address the growing market of enterprise networking upgraders, Broadcom introduced a slate of new products at Interop this week. The first, the BCM54920 physical layer transceivers, can deliver gigabit speeds while lowering power consumption 40 percent compared to previous generations, thanks to integrated AutoGrEEEn plus technology. Read the full BCM54920 press release here. To better compete in today's fast-moving economy, enterprises must be sure their network performance is in tip-top shape, so Broadcom has a new ARM-based low-power system on a chip (SoC) that enhances performance and security for small, medium and enterprise-sized networks. 
Read the full BCM58525 press release here. Broadcom's Interop</description>
      </item>
      <item>
         <title>Introducing Tomahawk: New StrataXGS Series Scales Up the Cloud</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/introducing-tomahawk-new-strataxgs-series-scales-up-the-cloud/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/introducing-tomahawk-new-strataxgs-series-scales-up-the-cloud/</guid>
         <pubDate>September 24, 2014</pubDate>
         <description>Not all data centers are created equal, and for good reason. Big Internet sites with massive amounts of data, as well as a large flow of users (think Google, Facebook and Amazon), have different needs than smaller websites. Companies of that magnitude invest in building what the technology industry refers to as massively scalable data centers, which enable the apps or other products they offer over the Internet to be widely deployed with breathtaking speed. For these companies and others, it's not enough to make sure that the site always stays on. Those companies need their sites and services to be powered by data centers that must be bolstered continuously to keep up with new demands. Broadcom understands the importance of such demands. For some time now, at the heart of many massively scalable data centers have been Broadcom's StrataXGS Trident and StrataDNX product families, which give the people operating these networks a way to build the switches they need to manage all of that network traffic. Today, Broadcom is showcasing its data center innovation with the introduction of the new StrataXGS Tomahawk switch family, a next-generation switching solution that offers industry-besting features to future-proof the performance of networks running all those high-demand applications. A single Tomahawk chip, as small as the back of your hand, can switch the equivalent of 1.5 million Netflix streaming movies at the same time. The Tomahawk family cements Broadcom as having the industry's widest, most robust, and most cost-to-performance optimized portfolio of switches for all types of data centers, including massively scalable data centers, public/private clouds, enterprise, and carrier data center applications. A Quick Guide to Internet Traffic To understand why meeting today's data center challenges is so important, it's helpful to look at the way Internet traffic flows. Until recently, a lot of what happened online</description>
      </item>
      <item>
         <title>Software Defined Networks Start Here: Webinar with Industry Experts</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/software-defined-networks-start-here-webinar-with-industry-experts/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/software-defined-networks-start-here-webinar-with-industry-experts/</guid>
         <pubDate>January 20, 2016</pubDate>
         <description>Software Defined Networking (SDN) is among the biggest trends for 2016, with carriers and enterprises testing pilot programs ahead of a bigger ramp-up that's expected over the next five years. SDN -- where software takes center stage in managing data center traffic flows -- promises a new era of efficiency, improved network performance, increased security and faster time-to-market for networking services. New York-based market researcher Reportlinker predicts that the global market for SDN will reach nearly $12 billion by 2020, according to a recent report. Broadcom's software defined networking (SDN) technologies, along with open-source initiatives from key industry leaders, are working to make networks more flexible, programmable and scalable. One such initiative is the Central Office Re-architected as a Datacenter, or CORD, project, which aims to unify the underlying common infrastructure to deliver data center economies of scale and cloud-style agility to service-provider networks, according to the Open Networking Lab (ON.Lab), a nonprofit, open-source software-defined network (SDN) tool development ecosystem out of Stanford University and the University of California, Berkeley. CORD is an SDN-based leaf-spine fabric built with bare-metal Open Compute Project hardware (using Broadcom switching silicon) and open-source switch software, built on top of the OpenFlow-Data Plane Abstraction (Broadcom's open-source SDN reference software). All of these benefits, taken together, enable operators of mega-scale data centers (such as case-study participant AT&amp;T) to reduce their capital and operational expenses and roll out key performance-enhancing features. In the webinar detailed below, experts from Broadcom, AT&amp;T, ON.Lab and the Open Networking Foundation present this real-world SDN use case, which is now being implemented by one of the world's largest service providers.

Register for the free Webinar and tune in on January 28 at 10:00 am PST.
</description>
      </item>
      <item>
         <title>The Power of Connections [Video]</title>
         <link>https://www.broadcom.com/blog/wireless-technology/the-power-of-connections-video/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/the-power-of-connections-video/</guid>
         <pubDate>January 11, 2012</pubDate>
         <description>We live in an ever-changing, connected world, and the proliferation of consumer electronics is at the heart of it all.
In 2012, there will be more devices than people on Earth.


Today, Broadcom is driving more innovation and powering more connected devices than ever before. Every day, 99.98 percent of the world's data touches a Broadcom chip.

Explore the power of connections and learn amazing facts about the growth of our markets and how Broadcom innovations are changing the world.

Video highlights:

	There are more than 4 billion smartphones in use on the planet
	Smartphone sales are growing faster than mobile phone sales
	The average consumer is never more than 3 feet from their smartphone and checks it 40 times per day
	Half of all Facebook users gain access via a mobile device and are twice as active as non-mobile users
	Hospitals that use Web-based health records have a 5% lower mortality rate
	Video represents 50% of all mobile traffic and is expected to grow to 90%
	Near-Field Communications (NFC) enabled devices are projected to reach 650 million by 2015
</description>
      </item>
      <item>
         <title>5G WiFi, NetXtreme Ethernet Win Top Awards at Interop</title>
         <link>https://www.broadcom.com/blog/wireless-technology/5gwifi-netxtreme-ethernet-win-top-awards-at-interop/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/5gwifi-netxtreme-ethernet-win-top-awards-at-interop/</guid>
         <pubDate>May 9, 2012</pubDate>
         <description>Broadcom's technologies were honored this week as some of the best at the Interop conference in Las Vegas. Specifically, Broadcom's family of 802.11ac, or 5G WiFi, chips was named Best New Product in the 2012 Network Products Guide Hot Companies and Best Products Awards. And Broadcom's NetXtreme Ethernet Solutions won the IT Products and Services for Enterprise (Medium) category. The awards are an honor for Broadcom as it continues to see momentum around these innovative technologies. Broadcom was the first to introduce chips using the new 802.11ac wireless technology and has created early excitement around the concept of 5G WiFi. Already, companies are rolling out products that will use the new technology, Netgear being the first, and others are excited about the potential it will offer. The technology promises a more robust and more seamless experience around digital content, such as video, and the ability to share that content across a number of devices, including televisions, mobile phones, tablet computers and traditional PCs. The NetXtreme Ethernet Solutions were also recognized for the innovation they bring to enterprise networks and data centers. Broadcom's Ethernet adapter offerings are engineered to help IT managers meet new demands, such as the increasing adoption of virtualization, as well as leverage advanced features, such as Energy Efficient Ethernet, which enables up to 42 percent less power consumption and lower IT operating costs. Broadcom's diverse portfolio of semiconductor solutions was also named a finalist in the 2012 Network Products Guide Awards in eight categories, including Access, Best IT Hardware for the Enterprise, IT Products for the Enterprise, Services for Telecom, Data Center, Green IT, Mobile &amp; Wireless, and Best New Product. 
Full Coverage: Broadcom at Interop 2012 Broadcom at Interop: Power Consumption Technology Plays Important Role Broadcom at Interop: Energy Efficient Ethernet is Good for</description>
      </item>
      <item>
         <title>NFC: More than Just Mobile Payments</title>
         <link>https://www.broadcom.com/blog/wireless-technology/nfc-more-than-just-mobile-payments/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/nfc-more-than-just-mobile-payments/</guid>
         <pubDate>August 8, 2013</pubDate>
         <description>Are you one of those consumers who taps a smartphone against that special payment terminal at the checkout counter, instantly paying for those new shoes or the cart full of groceries? No? You're not alone. Mobile payments is an idea that's been talked about and tested for some time now but just hasn't generated the traction that might be expected, especially given the infatuation that today's consumers have with their beloved smartphones. But now, a technology that's behind mobile payments is starting to make its way into the hands of consumers, via a new wave of smartphones, including Google-powered Android phones that run Broadcom's NFC software stack. The technology, called Near Field Communication, or NFC, powers a secure and temporary short-range connection between two devices through a simple tap. In the video below, Richard Ybarra, wireless LAN product manager in the Mobile &amp; Wireless Group at Broadcom, demonstrates how mobile payments work with Near Field Communication technology. Using a Google Nexus tablet and smartphone, Ybarra shows how easy it is to pay for an item by tapping together the two mobile devices. It may be convenient and easy to use, but consumers can be fickle when it comes to new ideas, especially when it involves a new use of the device (tapping) to access something so precious to them (their bank accounts and credit cards). That's why it's important to note that these tap-and-go data transfers enabled by NFC technology don't have to involve financial payments. Indeed, before NFC can take off as a mainstream payment method, consumers may need to establish their comfort levels by using NFC in other contexts, touching their devices to other electronics for other purposes, said Mohamed Awad, director of product marketing in the Mobile &amp; Wireless Group at Broadcom and vice chair of the NFC Forum. In a</description>
      </item>
      <item>
         <title>RoboSwitch-2 accelerates proliferation of TSN Ethernet for Industry 4.0</title>
         <link>https://www.broadcom.com/blog/roboswitch-2-accelerates-proliferation-of-tsn-ethernet-for-industry-4-0</link>
         <guid>https://www.broadcom.com/blog/roboswitch-2-accelerates-proliferation-of-tsn-ethernet-for-industry-4-0</guid>
         <pubDate>November 10, 2017</pubDate>
         <description>The Industrial Automation (IA) market is currently undergoing its fourth evolutionary wave, one that scholars are calling Industry 4.0. This process started in the late 18th century with the original industrial revolution, and it is now seeing technologies associated with the “Internet of Things” (IoT) adapting to the industrial ecosystem, introducing a new term – the “Industrial IoT” or “IIoT” – into our collective technological lexicon. Figure 1: The 4th Industrial revolution (Industry 4.0) High-bandwidth connectivity and seamless interoperability between all nodes in the ecosystem (key attributes of Ethernet) are critical to ensuring Industry 4.0 meets its full potential. These attributes have been the driving force behind the adoption of Ethernet in the Industrial Automation market, which has already shifted from fieldbus protocols such as Profibus, Sercos I and II, Bitbus, Modbus and others to Ethernet-based protocols: Profinet, Sercos III, EtherNet/IP and EtherCAT. However, Ethernet as we use it today (IEEE 802.3 and parts of IEEE 802.1) possesses a key limitation that prevents it from becoming the sole protocol for industrial applications: traditional Ethernet is not deterministic and does not provide real-time control. Ethernet cannot guarantee traffic latency or jitter (delay variation), nor can it easily reserve bandwidth along a network path for specific traffic flows. Key industrial applications require guaranteed minimal latency and jitter. Too often, due to this lack of determinism, Ethernet is deployed either alongside legacy bus protocols or with an overlay of proprietary protocols that complement it. For example, motion control applications use Profinet IRT (Isochronous Real Time) as an overlay to Ethernet to guarantee sub-1 ms latency. 
Figure 2: Ethernet industrial automation manufacturing and overlay industry protocols Determinism comes to Ethernet Time-Sensitive Networking (TSN), a new set of Ethernet standards, was developed to complement the basic characteristics of Ethernet and make it deterministic. It includes well-known time-synchronization protocols,</description>
      </item>
      <item>
         <title>Connected Home Technologies: See the Enhanced In-Home Experience at CES</title>
         <link>https://www.broadcom.com/blog/connected-home-technologies-see-the-enhanced-in-home-experience</link>
         <guid>https://www.broadcom.com/blog/connected-home-technologies-see-the-enhanced-in-home-experience</guid>
         <pubDate>December 6, 2012</pubDate>
         <description>[caption id=&quot;attachment_5741&quot; align=&quot;alignright&quot; width=&quot;314&quot;] A look at how DLNA, Wi-Fi, MoCA and HomePlug work together among multiple devices within a home.[/caption] The ultimate connected home is heading to Las Vegas for the big Consumer Electronics Show next month, and the top home networking alliances are getting ready for more than just demos of their technologies. This year, the focus is on how they work together to bring home connectivity to the next level. Those networking groups' technology standards (Digital Living Network Alliance (DLNA), HomePlug, Multimedia over Coax (MoCA) and Wi-Fi) are already working in today's home. Multi-room DVR, the ability to record a TV show on the living room set-top box and watch it from the connected bedroom TV, is just one example. Their roles in the larger network, and how they support each other to provide whole-home coverage, are what's important to the enhanced experience. With the backing of proven interoperability certification programs, these technologies work together in such a seamless way that most people don't even think twice about what's powering these experiences. Read the press release. Broadcom is the only chip company to offer all of these home networking standards. HomePlug's powerline networking technology allows users to extend the coverage of their home networks so it reaches more places, all via an adapter that plugs into an ordinary electrical outlet. MoCA technology provides the high-throughput networking, while DLNA allows for sharing of all of your videos, music and photos across all of your devices. And, of course, there's Wi-Fi, the standard that tablets and smartphones in many homes are tapping into for wireless connectivity. 
If you're planning to attend CES, swing by the Connectivity Alliances TechZone to see how these networking technologies are changing the in-home experience. The demo area will be located in South Hall 1 at Booth No. 20300. If you're not attending the show,</description>
      </item>
      <item>
         <title>Broadcom at Interop: Unprecedented Innovation for Next Gen Data Center, Green IT and Enterprise 2.0</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/httpwww-broadcom-comcompanyeventsinterop12-php/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/httpwww-broadcom-comcompanyeventsinterop12-php/</guid>
         <pubDate>May 8, 2012</pubDate>
         <description>We're excited to be in Las Vegas this week for the Interop show, where we'll be showcasing the full line-up of our latest innovations enabling the next-generation Data Center, Green IT and Enterprise 2.0. Visitors to our booth on the main show floor will get an up-close and personal demonstration of our amazing technologies in action. Terabit Anyone? In the Data Center area of our booth you will find our new 100GbE BCM88650 switch and BCM87550 fabric, enabling terabit connectivity from the edge to the core of the network. Also on display will be our latest PHYs: our 10GBASE-T PHYs, providing up to 50 percent lower power consumption, and our 100GbE Gearbox PHYs, the industry's first to support 10GbE, 40GbE and 100GbE line interfaces. Need to keep it green? Our newest additions to our ever-expanding Energy Efficient Ethernet (EEE) portfolio offer power reduction of up to 70 percent per port. For those curious to see how the NetLogic portfolio fits into our overall solution post-acquisition, the complete portfolio of Broadcom knowledge-based and multi-core processors will be on display. Weary IT and network managers looking to make their lives just a bit easier should check out our Enterprise 2.0 innovations, including the recently announced BCM56545 series with App-IQ technology, which provides network managers with the tools they need to analyze and regulate web application traffic to ensure employee productivity. And if all of that weren't enough to keep us hopping, we're thrilled to be a finalist in the Best of Interop awards for our 5G WiFi 802.11ac solution, providing on-the-go gigabit technology for the mobile workforce. We look forward to seeing you in our booth at Interop Las Vegas! In the interim, you can find our full lineup of news from the show by following us on Twitter or visiting</description>
      </item>
      <item>
         <title>Enterprise 2.0: Broadcom puts Network Managers in the Fast Lane</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/enterprise-2-0-broadcom-puts-network-managers-in-the-fast-lane/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/enterprise-2-0-broadcom-puts-network-managers-in-the-fast-lane/</guid>
         <pubDate>May 2, 2012</pubDate>
         <description>As employees around the world replace their desktops and laptops with mobile devices, network managers are seeking new ways to provision, secure and control enterprise computing resources and information access to keep employees connected, regardless of their location.

This week, we unveiled our latest innovation designed for the modern workforce (aka Enterprise 2.0). The BCM56545 brings the innovation Broadcom is known for in the data center and service provider networks to the wiring closet, enabling gigabit connectivity in the enterprise.

One very cool feature of this new solution is our App-IQ technology. It provides network managers with application-level visibility, meaning they can more effectively manage congestion, block undesired applications and ultimately increase employee productivity. Until now, only specialized and expensive appliances could deliver this level of application visibility.

So what does this mean for the common road warrior doing business on the go? It means faster connections, effortless streaming video and easy access to cloud-based applications and data. In short, it means a better overall experience when connecting with the corporate network.

Come by our Interop booth next week to see the technology in action or visit our website to learn more.

Full Coverage: Broadcom at Interop 2012

	Broadcom at Interop: Power Consumption Technology Plays Important Role
	Broadcom at Interop: Energy Efficient Ethernet is Good for the Planet
	Technology Moving at the Speed of Life: Broadcom Enables Massive Network Scalability
	Broadcom at Interop: Next-Generation Data Centers Shift into High Gear

 </description>
      </item>
      <item>
         <title>OFC in Anaheim: Optical Transport in Focus</title>
         <link>https://www.broadcom.com/blog/network-infrastructure/ofc-in-anaheim-optical-transport-in-focus/</link>
         <guid>https://www.broadcom.com/blog/network-infrastructure/ofc-in-anaheim-optical-transport-in-focus/</guid>
         <pubDate>March 18, 2013</pubDate>
         <description>When all of our communications were dominated by voice, circuit-based connections ruled the day for optical network deployments. Long-duration connections would exist between multiple points in a network, and information would flow between those points. Even if no information was moving (the silence during a phone call, for example), those connections still existed and reserved their required bandwidth. The world is different today. The rise of the Internet as the backbone of modern-day communications means that the devices that carry these transmissions, from personal computers to connected mobile devices, rely on packet-based data: information that is sent at random intervals depending on the nature of the traffic itself. Emails, for example, are information packets that only use bandwidth as they are transmitted. As such, optical networks are continuing to evolve so that they can handle not only the increased bursts of traffic brought about by millions of new data-hungry consumers, but also the new types of end-user devices that are being integrated into enterprise networks. The challenge for network operators is to implement an optical network design that offers a cost-effective balance between legacy circuit-based connections and more modern packet-based technologies. Optical Transport Networks (OTN), with flagship 100Gb technologies, promise to meet this challenge. OTNs are poised for growth in the coming years, and Broadcom is ready with the industry's broadest end-to-end portfolio of PHYs and switches that aim to help carriers lower costs and maximize the lifespan of their networks.
Related: Next-Generation Data Centers Shift into High Gear Broadcom is at the Optical Fiber Communication Conference (OFC/NFOEC) in Anaheim this week talking about our latest innovations in optical networking that are paving the way for 100G long-haul networks. The common thread is that Broadcom is enabling next-gen optical transport while keeping power usage at a minimum. Here's a look at the</description>
      </item>
      <item>
         <title>Cloud Strain? Broadcom to Demonstrate Cloud-Scale Network Architecture Tools and Technologies at Interop 2014</title>
         <link>https://www.broadcom.com/blog/cloud-strain-broadcom-to-demonstrate-cloud-scale-network-archit</link>
         <guid>https://www.broadcom.com/blog/cloud-strain-broadcom-to-demonstrate-cloud-scale-network-archit</guid>
         <pubDate>March 28, 2014</pubDate>
         <description>When it comes to the cloud, the Internet's invisible storage cabinet where streaming music originates, uploaded photos are shared or mobile games are stored, consumers don't need to actually see it to know that it's there. The same goes for the cloud's underlying backbone, which is made up of millions of moving parts (think servers, switches, routers and cables) across thousands of data centers. In that sense, the cloud definitely isn't some weightless file cabinet floating around in the atmosphere. It's real hardware, powered by real technology. And as we access larger amounts of data from a growing lineup of devices (tablets, wearables and more), the demands can create a strain on the physical infrastructure itself. Network administrators and IT professionals are constantly performing a balancing act, scaling up to meet growing needs while also managing costs and power constraints. Cloud Configuration At the Interop trade show in Las Vegas next week, Broadcom will demonstrate a suite of technologies for Cloud Scale Networking. The focus is on giving choices to IT professionals tasked with identifying network switching and interconnect technologies that can best help maximize cloud infrastructure efficiency and usage. Cloud Scale Networking, and the many choices network operators must make when building new data centers, is at the core of a discussion at Interop called &quot;A Blueprint for Scalable Data Center Fabrics: Leveraging Ethernet for Cloud-Scale Performance, Economics, and Workload Agility.&quot; The session will be an interactive dialogue between Facebook executive Najam Ahmad, director of technical operations, and Ariel Hendel, senior technical director of switch architecture in Broadcom's Infrastructure &amp; Networking Group. Changes Ahead IT managers are facing tremendous change in their industry as the movement toward more open, shared networking architectures takes root.
Broadcom, which strives to create products that help shape the network as it adapts</description>
      </item>
      <item>
         <title>Broadcom Powers Up with Wireless Charging for Smartphones</title>
         <link>https://www.broadcom.com/blog/wireless-technology/broadcom-powers-up-with-wireless-charging-for-smartphones/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/broadcom-powers-up-with-wireless-charging-for-smartphones/</guid>
         <pubDate>May 28, 2014</pubDate>
         <description>Show me a gadget-lover and I'll show you his junk drawer: It's likely overflowing with a tangled collection of wall chargers in all shapes and sizes. But that could be changing soon. The holy grail of true wireless battery charging, when device-toting consumers can unleash themselves from wall outlets, ditch all their disparate cables and charge their battery-run devices in the same spot, is just around the corner. For consumers and device-makers alike to get on board, industry experts say that above all else, wireless charging needs to have a seamless, nearly effortless user experience, something that's akin to the daily pocket purge when walking through the door. The average household has more than 10 different devices to charge, according to ABI Research. And that figure is set to rise as the wearables trend takes off and invites battery-powered connected fitness trackers, watches and more into homes. &quot;At my house, I'm the guy who has to figure out which charger goes with which device,&quot; said Reinier van der Lee, director of product marketing, mobile platforms at Broadcom. &quot;There's a real market need for a universal charging solution, and it should be wireless.&quot; Broadcom first started talking about wireless charging at January's Consumer Electronics Show, when the company demoed some early reference designs with the Alliance for Wireless Power (A4WP), an industry trade group. Today, Broadcom unveiled an end-to-end wireless charging chipset that can usher in a new (cable-free) era where convenience is king. &quot;The idea is to drop-and-go so you don't have to think about it,&quot; van der Lee said. &quot;There is a convenience factor that is not to be underestimated.&quot; That convenience factor is at the heart of Broadcom's design, which enables multiple devices with different power requirements (say, a tablet, smartphone and smartwatch) to charge at the same time, without worrying where they</description>
      </item>
      <item>
         <title>MWC 2014: NFC Heads for More Devices, Broadcom Brings Costs Down</title>
         <link>https://www.broadcom.com/blog/wireless-technology/mwc-2014-nfc-heads-for-more-devices-broadcom-brings-costs-down/</link>
         <guid>https://www.broadcom.com/blog/wireless-technology/mwc-2014-nfc-heads-for-more-devices-broadcom-brings-costs-down/</guid>
         <pubDate>February 23, 2014</pubDate>
         <description>For years now, there's been buzz about the benefits of a mobile technology called Near Field Communication, or NFC. The short-range communications protocol enables a connection between two devices (smartphone to headset or tablet to TV, for example) through a simple tap. But like many emerging technologies, it sometimes takes a milestone at the component level to fast-forward adoption. This week, at the Mobile World Congress show in Barcelona, Broadcom is introducing a next-generation portfolio of NFC controllers that will unleash the benefits of NFC for consumers in more devices, including NFC's integration into smartphones and additional small form-factor devices. NFC is predicted to show up in a variety of platforms this year, including tablets, TVs and peripherals, as well as some less-expected devices, such as home appliances, cameras and speakers, according to ABI Research. One of the most talked-about benefits of NFC is the ability to make tap-and-go payments using a mobile device, using smartphone apps to connect with cash registers, parking meters, public transportation systems and other types of payment set-ups. Because Broadcom's BCM20795 reduces the antenna size by 50 percent, it not only reduces the costs of incorporating the tap technology into devices but also broadens the range of devices that it can reach. The new NFC portfolio is now compatible with Broadcom's WICED (Wireless Internet Connectivity for Embedded Devices) platform, which is enabling a new category of devices called wearables (think: fitness trackers, smartwatches, and the like) to connect to the Internet, the cloud, or an app. That means that NFC's tap-to-connect capabilities will extend to a broader range of connected devices than ever before.
&quot;With mobile payment adoption building momentum and consumers using smartphones and tablets more frequently to tap and share information, it is critical for NFC to be available on affordable devices,&quot; said Rahul Patel, vice</description>
      </item>
      <item>
         <title>Lewis Brewster in Wireless Week: &quot;Access to Unlicensed Spectrum Has Been the True Catalyst for Innovation&quot;</title>
         <link>https://www.broadcom.com/blog/lewis-brewster-in-wireless-week-access-to-unlicensed-spectrum-h</link>
         <guid>https://www.broadcom.com/blog/lewis-brewster-in-wireless-week-access-to-unlicensed-spectrum-h</guid>
         <pubDate>August 6, 2015</pubDate>
         <description>Editor's Note: Broadcom experts often weigh in on popular topics on industry sites around the Web. Below is a reprint of a story that appeared in Wireless Week in which Lewis Brewster, Vice President and General Manager of Wireless Connectivity at Broadcom, talks about the importance of unlicensed spectrum to the future of communications technology. From Wireless Week: Today's consumers have come to rely on both licensed cellular and unlicensed Wi-Fi spectrum to access data and content anywhere. While there's no doubt that licensed cellular spectrum has been instrumental in the widespread adoption of mobile wireless technologies, access to unlicensed spectrum has more recently been the true catalyst for innovation. As an example, one need look no further than Wi-Fi, one of the most successful unlicensed technologies, with roughly 10 billion Wi-Fi enabled devices shipped worldwide to date and an additional five billion expected just four years from now. Consumers with smartphones and tablets rely on Wi-Fi's typically faster, more reliable connections at lower cost than cellular access. Consumers also depend on Wi-Fi to enable real-time video and connected services in the smart home. To keep pace with consumers' ever-increasing appetite for streaming content, real-time video and 24/7 connectivity, operators are constantly looking into more efficient ways to use existing licensed spectrum, and some are now looking to expand LTE-like service into unlicensed spectrum to increase capacity. This is a critical juncture for unlicensed spectrum in which friendly co-existence between new technology and existing Wi-Fi deployments is paramount. There are two LTE technologies designed for use in unlicensed spectrum currently under consideration by operators: the non-standard LTE-Unlicensed (LTE-U) protocol and the License Assisted Access (LAA) standard currently under development. LTE-U is a pre-standard, proprietary technology proposed by the LTE-U Forum.
LAA is a formal standard being spearheaded by the</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Tomahawk® II</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-tomahawk-ii</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-tomahawk-ii</guid>
         <pubDate>January 31, 2017</pubDate>
         <description>What writers and editors from the industry press are saying about Broadcom’s Tomahawk® II 64-port 100GbE switch. From Rick Merritt at EE Times: “Broadcom is sampling a chip for data center switches that supports 64 100-Gigabit Ethernet links, beating rivals Cavium and two startups to the punch. The 16nm Tomahawk II gives Broadcom a beachhead at a time when the sector is driving toward software-defined networks. “The chip represents a new high end for Broadcom’s StrataXGS family. It integrates 256 serdes running at more than 25 Gbits/second for 6.4 Tbits/s aggregate bandwidth that also can be configured as 128 50GE ports.” From Charlie Demerjian at SemiAccurate: “Broadcom upped the ante to 64 100Gbps ports with the new Tomahawk 2 switch silicon. “Better yet the Tomahawk family which supplants the Trident family before it is also built on the same Broadcom SDK so code that doesn’t carry over directly should be an easy port between generations.” Deterministic latency. “Broadcom claims sub-400ns latencies when traversing the chip, not a bad number to start off with. Better yet you get the same bandwidth no matter how many features you apply to one of these flows. You may have noticed that we didn’t claim that latencies would stay the same when going from packet switching to turning on all the advanced features, they will obviously go up. What won’t go up, or in this case won’t go down, is the bandwidth: that 6.4Tbps number is claimed to be unchanged with all or no features applied. To us anyway, this is an impressive accomplishment.” Ecosystem. “As we mentioned before you can have up to 128 ports of 40/50Gbps or 64 100Gbps ports all off of one chip. The port count can collapse datacenter tiers which drop cost, complexity, and latency, things which are</description>
      </item>
      <item>
         <title>New DC-50MBd SFP transceiver cuts power consumption by half and doubles fiber density for HVDC VBE system</title>
         <link>https://www.broadcom.com/blog/new-dc-50mbd-sfp-transceiver-cuts-power-consumption-by-half-and-doubles-fiber-density-for-hvdc-vbe-system</link>
         <guid>https://www.broadcom.com/blog/new-dc-50mbd-sfp-transceiver-cuts-power-consumption-by-half-and-doubles-fiber-density-for-hvdc-vbe-system</guid>
         <pubDate>August 25, 2017</pubDate>
         <description>Fiber optic components have long been key to data transfer and control links between equipment in power generation and distribution markets. In this market -- where equipment often operates at high voltage and current -- the environment makes fiber optic cable the de facto standard as a communication medium. Why? Because fiber optic components have the intrinsic property of high immunity against electromagnetic interference on top of high galvanic isolation. This is especially true when the control signal is transmitted within an HVDC system that operates at several hundred kilovolts. Fiber optic components are used to send control and feedback signals between the Valve Base Electronics (VBE) and the thyristor valves in the HVDC system (see Figure 1). The valves, which consist of thyristors, are hung from the ceiling. Due to the extremely high voltage, the valves are usually controlled remotely by a VBE located in another building, through fiber optic links. As there are many valves to be controlled in a single HVDC system, the VBE system requires small-size, low-power fiber optic components that allow higher port density. Broadcom is a leading provider of fiber optic solutions for HVDC applications. Our Versatile Link products have been used extensively for short-link fiber optic communications between the thyristor control unit (TCU) and the thyristor (SCR) module in a thyristor valve. Expanding upon Broadcom’s Miniature Link products like the HFBR-1414TZ and HFBR-2418TZ, which have been used to provide long-link communications between the VBE and TCU via multimode fiber cable, the latest AFBR-57B4APZ solution offers significantly lower power consumption and doubles the fiber density per board. As the modern HVDC technology transmits</description>
      </item>
      <item>
         <title>Optocouplers for insulation resistance measurement</title>
         <link>https://www.broadcom.com/blog/optocouplers-for-insulation-resistance-measurement</link>
         <guid>https://www.broadcom.com/blog/optocouplers-for-insulation-resistance-measurement</guid>
         <pubDate>November 8, 2017</pubDate>
         <description>In high-voltage applications like industrial motors, solar energy generation, and electric vehicle (EV) battery management systems (BMS), the measurement of insulation resistance is critical to determine whether the system can be put into operation without serious safety concerns. Under high-voltage conditions, failure of insulation will cause high leakage current, damaging the system and injuring the user. Insulation resistance needs to be measured periodically, as it degrades over time under high-voltage electric and thermal stress. As such, it is important that the system include an insulation resistance measurement function to monitor any degradation and take action before failure occurs. This article will discuss how optocouplers can provide high-voltage isolation and complete the insulation resistance measurement function in the system. Principles of operation To determine the insulation resistance, a DC voltage of 500V or more is first applied to the system. For an industrial motor, IEEE Std 43-2000 provides guidelines for the applied DC voltage, as shown in Table 1.
Table 1. Guidelines for DC voltages to be applied during insulation resistance test
Winding Rated Voltage (V) | Test Applied Voltage (V)
&lt;1000 | 500
1000-2500 | 500-1000
2501-5000 | 1000-2500
The voltage VSENSE is then sensed across a shunt resistor RSHUNT to determine the leakage current ILEAKAGE. The insulation resistance RISO from the motor windings to the frame can then be calculated by Ohm's law, as shown in Figure 1 (block diagram of the insulation resistance measurement circuit). An RSEL resistor network is used to select and cover the entire range of insulation resistance. Using the same IEEE standard as reference, a general rule of thumb is that good insulation resistance is above 10 MΩ, as shown in Table 2.
Table 2. Recommended insulation resistance values
Insulation Resistance (MΩ) | Insulation Level
&lt;10 | Abnormal
10-50 | Good
50-100 | Very Good
&gt;100 | Excellent
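The measurement chain just described (apply a test voltage, sense VSENSE across RSHUNT, derive ILEAKAGE, then RISO by Ohm's law) can be sketched in a few lines of Python. This is a hypothetical illustration, not Broadcom reference code; the component and voltage values are invented, and the classification follows the rule-of-thumb table above.

```python
# Hypothetical sketch of the insulation resistance measurement described
# above; all numeric values are invented for illustration.

def insulation_resistance(v_applied, v_sense, r_shunt):
    """R_ISO = V_applied / I_leakage, where I_leakage = V_SENSE / R_SHUNT."""
    i_leakage = v_sense / r_shunt      # leakage current (A)
    return v_applied / i_leakage       # insulation resistance (ohms)

def insulation_level(r_iso_mohm):
    """Rule-of-thumb classification (resistance in megaohms)."""
    if r_iso_mohm < 10:
        return "Abnormal"
    if r_iso_mohm <= 50:
        return "Good"
    if r_iso_mohm <= 100:
        return "Very Good"
    return "Excellent"

# Example: 500V test voltage, 1.25 mV sensed across a 100-ohm shunt
r_iso = insulation_resistance(500.0, 1.25e-3, 100.0)
print(round(r_iso / 1e6, 3), insulation_level(r_iso / 1e6))  # -> 40.0 Good
```

In a real design the shunt voltage would come from an isolated-amplifier or optocoupler-isolated ADC path rather than a direct read, but the arithmetic is the same.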
For solar energy generation, DIN EN 61646; DIN</description>
      </item>
      <item>
         <title>Unmatched RF innovation and performance</title>
         <link>https://www.broadcom.com/blog/unmatched-rf-innovation-and-performance</link>
         <guid>https://www.broadcom.com/blog/unmatched-rf-innovation-and-performance</guid>
         <pubDate>February 16, 2018</pubDate>
         <description>This is part of a series examining Broadcom's role in innovating the past decade of wireless technology. The evolution of the smartphone has been largely driven by the progression of cellular technology, from 2G to 3G to present-day 4G (LTE). With each successive generation of cellular networks, service providers have used various ways to increase network capacity to deal with the rise in mobile data traffic. Adding more frequency bands to address new user demands, implementing carrier aggregation (CA) to support higher data rates, and adopting MIMO to enable higher spectral efficiency have all impacted the smartphone RF front end design. RF content in the smartphone grew substantially as a result of these new requirements and expanded functionality. From 2007-2010, RF content consisted mostly of discrete components like FBAR filters, GaAs power amplifiers (PAs), and pHEMT low noise amplifiers (LNAs). Around 2011, LNA-filter front end modules (FEMs) and multi-band power amplifier modules (PAMs) began to appear in smartphones, replacing discrete components to reduce RF system BOM. RF module design complexity has grown dramatically since 2014 due to increasing band count and CA requirements. Hence, a series of highly integrated FEM+PA solutions have been developed and adopted in high-end smartphones. Compared to some of the high-end smartphones released in 2014, which were only capable of supporting up to 20 bands with 32 CA combinations, modern flagship smartphones are designed to handle up to 30 bands with more than 200 CA combinations. As the mobile bandwidth requirement per smartphone continues to increase, so does the complexity of the RF front end, making it more challenging for system engineers to address certain critical RF requirements such as linearity, isolation and efficiency. Broadcom is the industry leader in providing advanced RF front end solutions for smartphones.
Leveraging its unique RF portfolio with in-house</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Broadcom's BCM47755 dual frequency GNSS receiver</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-broadcoms-bcm47755-dual-frequency-gnss-receiver</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-broadcoms-bcm47755-dual-frequency-gnss-receiver</guid>
         <pubDate>October 19, 2017</pubDate>
         <description>From Samuel K. Moore in IEEE Spectrum: “At the ION GNSS+ conference in Portland, Ore., today Broadcom announced that it is sampling the first mass-market chip that can take advantage of a new breed of global navigation satellite signals and will give the next generation of smartphones 30-centimeter accuracy instead of today’s 5 meters. Even better, the chip works in a city’s concrete canyons, and it consumes half the power of today’s generation of chips. The chip, the BCM47755, has been included in the design of some smartphones slated for release in 2018, but Broadcom would not reveal which. GPS and other global navigation satellite systems (GNSSs), such as Europe’s Galileo, Japan’s QZSS, and Russia’s Glonass, allow a receiver to determine its position by calculating its distance from three or more satellites. All GNSS satellites—even the oldest generation still in use—broadcast a message called the L1 signal, which includes the satellite’s location, the time, and an identifying signature pattern. A newer generation broadcasts a more complex signal called L5 at a different frequency in addition to the legacy L1 signal. The receiver essentially uses these signals to fix its distance from each satellite based on how long it takes the signal to go from satellite to receiver. Broadcom’s receiver first locks onto the satellite with the L1 signal and then refines its calculated position with L5. The latter is superior, especially in cities, because it is much less prone to distortions from multipath reflections than L1.” From Tony Murfin in GPS World: “So, what does taking positioning from 5 meters with L1-only (Broadcom 4774 chip) to 30 centimeters with L1/L5 (Broadcom BCM47755 new chip) do for OEM manufacturers? Without external sensor aiding, you can get lane-departure warnings for cars; with more satellite visibility, it enables much-improved downtown navigation. Probably the biggest</description>
      </item>
      <item>
         <title>Pioneering leadership in GNSS</title>
         <link>https://www.broadcom.com/blog/pioneering-leadership-in-gnss</link>
         <guid>https://www.broadcom.com/blog/pioneering-leadership-in-gnss</guid>
         <pubDate>February 9, 2018</pubDate>
         <description>This is part of a series examining Broadcom's role in innovating the past decade of wireless technology. Although it is often taken for granted, GNSS (or GPS) is fundamentally important to many smartphone applications, not just navigation. By enabling location services in the smartphone settings, a user can map out a detailed road trip with intermediate stops along the route, geotag photos or videos captured at different places, locate and keep track of family members while on vacation, or play a mobile augmented reality (AR) game like Pokémon Go while strolling through the park. The satellite navigation field has steadily improved over the last 10 years. More GNSS signals have been made publicly available by international satellite constellations, such as GLONASS from Russia, QZSS from Japan, Beidou from China, and Galileo from Europe. Multi-constellation GNSS improves navigational accuracy and yield because of the greater number of GNSS signals available for computation. In addition to the legacy GPS L1 signal, the U.S. has expanded the availability of the GPS L5 signal, a modernized signal that enables sub-meter accuracy. In parallel, European and Japanese constellations are also broadcasting their GPS L1/L5 equivalent dual-frequency signals. Presently, there are a sufficient number of dual-frequency GNSS satellites in orbit to support high-precision, location-based applications such as lane-level vehicle navigation and efficient e-hailing services. Having lane-level knowledge of the vehicle’s location vastly improves turn-by-turn navigation performance and enhances crowd-sourced traffic information in apps like Waze and INRIX. Being able to pinpoint the location of both the driver and rider with sub-meter accuracy in an e-hailing application like Uber or Lyft provides a much better estimate of arrival times.
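One way to see why dual-frequency signals enable sub-meter accuracy: the ionospheric delay on a GNSS pseudorange scales as 1/f², so measurements on the L1 and L5 carriers can be combined to cancel it. The sketch below is a hypothetical illustration, not receiver code; the carrier frequencies are the published GPS values, while the range and delay numbers are invented.

```python
# Ionosphere-free combination of L1/L5 pseudoranges (illustrative sketch).
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L5 = 1176.45e6  # GPS L5 carrier frequency, Hz

def iono_free(p_l1, p_l5):
    """Combine L1/L5 pseudoranges (m) to cancel the 1/f^2 ionospheric delay."""
    g = (F_L1 / F_L5) ** 2
    return (g * p_l1 - p_l5) / (g - 1)

# Invented example: ~20,200 km true range, 5 m of ionospheric delay on L1
true_range = 20_200_000.0
delay_l1 = 5.0
delay_l5 = delay_l1 * (F_L1 / F_L5) ** 2  # the same delay is larger on L5
combined = iono_free(true_range + delay_l1, true_range + delay_l5)
residual = combined - true_range          # ~0: the frequency-dependent term drops out
```

Real receivers also weight, filter and smooth these combinations; the point here is only that the dispersive ionospheric error, one of the largest single-frequency error sources, cancels when two frequencies are available.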
Since 2008, Broadcom has been delivering leading-edge GNSS receivers, leveraging the expanded availability of satellite signals for enhanced global positioning. Broadcom has shipped more than 1 billion GNSS chips</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Tomahawk® 3</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-tomahawk-3</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-tomahawk-3</guid>
         <pubDate>February 27, 2018</pubDate>
         <description>From Rick Merritt at EE Times: “Tomahawk-3 packs a Gbit of memory, the most of any Broadcom switch to date. Engineers designed a new block for the chip enabling a shared buffer for RDMA-aware traffic scheduling, a key to the chip’s performance. &quot;The chip is 'a major achievement,' Bob Wheeler, a senior analyst with The Linley Group, told EE Times. 'The new Broadcom has been surprisingly aggressive — it has actually increased its high-end product cadence. They are investing in three different switch architectures — Tomahawk, Trident, and Jericho — leaving few openings for competitors,' Wheeler said.&quot; From Craig Matsumoto at Light Reading: “Going beyond straight bandwidth, Broadcom says hyperscale players want to adapt their networks to trends such as deep learning and storage disaggregation. Both are driving the need for lots of connections in the network, but operators don't want this to lead to lots of hops between switches; they'd prefer for each switch to fan out to as many devices as possible. That's where sheer port count -- having 128 100-Gbit/s ports rather than the 64 on the Tomahawk 2 -- becomes important, Sankar says.” From Timothy Prickett Morgan at The Next Platform: “The Tomahawk-3 chips are aimed right at the bandwidth and latency needs of the hyperscalers and cloud builders, Rochan Sankar, senior director of core switch group at Broadcom, tells The Next Platform. This includes the massive number of stateless servers these companies deploy, which have a huge amount of chatter between servers that can be spread across a datacenter with 100,000 machines (the so-called east-west traffic) as well as for deep learning networks and disaggregated flash storage, both of which also require a high radix switch (a fancy way of saying it has a lot of ports per ASIC) that in turn creates</description>
      </item>
      <item>
         <title>At a Glance: First 7-nm 400G PAM-4 PHY device available from Broadcom </title>
         <link>https://www.broadcom.com/blog/7nm-400g-pam4-phy</link>
         <guid>https://www.broadcom.com/blog/7nm-400g-pam4-phy</guid>
         <pubDate>February 8, 2019</pubDate>
         <description>Broadcom's new 7-nm 400G PAM-4 PHY device, the BCM87400, is designed for data center and cloud infrastructure. Built on Broadcom’s state-of-the-art 7-nm Centenario™ 112G PAM-4 DSP platform, the device provides best-in-class 400G 8:4 gearbox performance while delivering the lowest power, enabling broad market adoption of 400GbE links in hyperscale data center and cloud networks.

Features, benefits, and applications:
- Feature: Industry-leading DSP performance and power efficiency, enabling DR4/FR4 optical modules to meet IEEE standards and MSA specifications. Benefit: Enables network operators to effectively deploy 400GbE links to address increasing bandwidth demands. Applications: Hyperscale cloud data center networks.
- Feature: Proven low-power PAM-4 architecture supporting multiple optics front ends, including EML, DML and silicon photonics. Benefit: Savings of 4W per 400G optical module in comparison with currently available CMOS solutions. Applications: 400G QSFP-DD DR4, 400G QSFP-DD FR4, 400G OSFP DR4 and 400G OSFP FR4 modules.

7-nm PAM-4 PHY accelerates the adoption of 400GbE network infrastructure
From Lorenzo Longo, senior vice president and general manager of the Physical Layer Products Division at Broadcom: “With the general availability of 12.8-Tb/s switches, such as Broadcom’s Tomahawk® 3, hyperscale data center operators and cloud providers will be leveraging the 400GbE ports in these switches to address increasing demand for higher bandwidth. Our low-power 7-nm Centenario PAM-4 DSP is essential to support high-density 400G connectivity using QSFP-DD and OSFP optical modules, accelerating the adoption of 400GbE network infrastructure. The currently available 16-nm 400G PHYs have been used to enable engineering prototypes and testing of 100G-per-lambda optical components. Our 7-nm Centenario 400G PHY enables high-volume deployment of 400G optical modules in hyperscale data centers.” From Dale Murray, principal analyst at LightCounting Market Research: “Broadcom demonstrates its technology lead in the PAM4 market segment with the sampling of the industry’s first 7-nm 400G PAM4 PHY. A savings</description>
      </item>
      <item>
         <title>Demartek reveals storage benchmark results of new Broadcom NVMe over Fibre Channel solutions for the connected enterprise </title>
         <link>https://www.broadcom.com/blog/demartek-reveals-storage-benchmark-results-of-new-broadcom-nvme-over-fibre-channel</link>
         <guid>https://www.broadcom.com/blog/demartek-reveals-storage-benchmark-results-of-new-broadcom-nvme-over-fibre-channel</guid>
         <pubDate>July 19, 2018</pubDate>
         <description>NVMe over Fibre Channel (NVMe/FC or FC-NVMe) has been a hot topic amongst enterprise IT professionals because it enables new levels of storage performance using today’s fastest all-flash arrays. And with a full range of storage solutions now on the market that support NVMe/FC and deliver Gen 6 Fibre Channel performance, Demartek recently tested some of these solutions in the lab to see exactly how well they performed. Demartek based its benchmark testing around the NetApp AFF A700s enterprise storage system running NetApp ONTAP 9.4, connected to a Brocade® G620 switch and Emulex® LPe32002 HBAs by Broadcom, which together deliver the latest 32GFC Gen 6 Fibre Channel performance. The tests compared two protocols within the Storage Area Network (SAN), NVMe/FC and SCSI-FCP. The results clearly favored the NVMe/FC configuration as it produced 58 percent higher IOPS while also providing up to 34 percent lower latency on the same hardware. Existing NetApp AFF A700s customers can achieve these huge performance improvements simply by upgrading to ONTAP 9.4 software. It’s no wonder enterprise IT professionals are excited about NVMe! Download the full report at demartek.com/modernsan Enterprise storage systems are tackling strenuous workloads including Big Data, analytics, deep learning, and A.I. These applications will only grow in size and scale as we move forward in our data-driven future. NVMe/FC is purpose-built to handle these and other demanding tasks across the storage network. NVMe over Fibre Channel evolves as the new standard for enterprise networks The enterprise is transforming with NVMe, a protocol that first emerged as the ideal server interface for solid state storage. NVMe radically simplified access to SSDs over the PCIe bus compared to the traditional SCSI approach, and ultimately also provides lower latency and greater IOPS performance. Among Demartek’s other key findings was how easy NVMe/FC is to adopt and</description>
      </item>
      <item>
         <title>Broadcom releases Long Reach VDSL products</title>
         <link>https://www.broadcom.com/blog/broadcom-releases-long-reach-vdsl-products</link>
         <guid>https://www.broadcom.com/blog/broadcom-releases-long-reach-vdsl-products</guid>
         <pubDate>October 24, 2017</pubDate>
         <description>Broadcom’s Central Office (CO) and Customer Premise Equipment (CPE) products are now able to boost throughput via the ITU-T’s Long Reach VDSL (LR-VDSL) standard. LR-VDSL, consented in June with final comments likely by year end, takes advantage of vectoring in the ADSL2+ band to boost speed for already deployed, but un-vectored ADSL and VDSL equipment. This throughput bump is enabled via software upgrade rather than equipment replacement, vastly reducing the cost and effort required to implement LR-VDSL. Broadcom offers LR-VDSL support today for the BCM65200 CO and BCM63138 CPE device families, with planned support for the BCM65300 CO and BCM63158 CPE families.

In many long loop environments, operators chose not to upgrade ADSL lines to vectored VDSL, as traditional VDSL will not train over long loops. The LR-VDSL standard addresses training over long loops, enabling operators to improve service rates for all users in the cable while offering the same reach as legacy ADSL technologies. As an added benefit, the speed boost will incrementally improve as more customers move to LR-VDSL and fewer remain on un-vectored lines.

“We are pleased to offer a standards-compliant, end-to-end software solution that extends the lifetime of the existing copper infrastructure,” said Greg Fischer, senior vice president and general manager of the Broadband Carrier Access division. &quot;The test results from ongoing operator field trials of LR-VDSL are impressive, and we believe they will reinforce the market demand for this solution.”

Broadcom will be demonstrating LR-VDSL in booth MR10 at the Broadband World Forum (BBWF) trade show in Berlin, Oct. 24-26. 
</description>
      </item>
      <item>
         <title>New Intel Xeon Scalable Platform-based servers and all-flash storage arrays drive demand for faster Broadcom networking solutions</title>
         <link>https://www.broadcom.com/blog/new-intel-xeon-scalable-platform-based-servers-and-all-flash-storage-arrays-drive-demand-for-faster-broadcom-networking</link>
         <guid>https://www.broadcom.com/blog/new-intel-xeon-scalable-platform-based-servers-and-all-flash-storage-arrays-drive-demand-for-faster-broadcom-networking</guid>
         <pubDate>March 13, 2018</pubDate>
         <description>Data centers are undergoing a rapid transformation driven by the availability of all-flash arrays (AFAs), Gen 6 Fibre Channel and the Intel® Xeon® Scalable Platform, all of which offer significant architectural and performance advances. According to Intel, the Intel Xeon Scalable Platform will help customers and partners find new ways to transform data centers to meet the requirements of new cloud, networking and artificial intelligence applications.

Advances in Intel Xeon Scalable architecture
This highly scalable design delivers a solution for almost every need. At the high end of the family, these new processors offer 28 cores per socket and support up to eight sockets with up to three Intel UPI (UltraPath Interconnect) links. They also support up to 1.5 TB of 2,666 MHz DDR4 memory. The number of PCI Express lanes per CPU has increased to 48 lanes of PCIe 3.0. The Xeon Scalable Platform was designed specifically for data center applications with a new mesh-based architecture that reduces latency at high core counts. Mesh architecture offers improved connectivity between processor cores compared with the ring architecture that has been a feature of Intel's data center processors since 2009. In addition, the Xeon Scalable Platform increases memory bandwidth by almost 50 percent by incorporating two additional memory channels, moving to a six-channel architecture from the previous quad-channel platform. With a total of six memory channels available to the processor and an increase in memory speed, memory-bound applications will experience a dramatic boost in performance. With the significant advances in compute power delivered by the Xeon Scalable Platform, the spotlight is now on storage systems and the network to deliver the necessary performance to match. 
On the storage side, new AFAs are delivering the performance required to solve storage bottlenecks with exponentially better IOPS and</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Brocade Gen6 Fibre Channel switch, port blade, and automation software</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-brocade-gen6-fibre-channel-switch-port-blade-and-automation-software</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-brocade-gen6-fibre-channel-switch-port-blade-and-automation-software</guid>
         <pubDate>May 14, 2018</pubDate>
         <description>From Dave Raffo at SearchStorage: “The REST APIs work with Brocade's storage partner management software. The PyFOS scripts make calls to the APIs and Ansible to automate deployment, management, provisioning and monitoring tasks. “We're bringing Fibre Channel into the open world,” Shimomura said of the automation. “We're bringing information out of the Fibre Channel SAN and pulling it into other types of reporting. That makes it easier, so it's not just the storage team that can do it. DevOps covers many areas outside of storage.” From Adam Armstrong at StorageReview: “On the automation side, the technology can be used in so many ways but in particular to reduce time on deployment, configuration, and troubleshooting as well as time spent maintaining SLAs. The Brocade switches will be leveraging Ansible on this end for automation and orchestration. The Brocade switch also introduces REST APIs directly, broadening the range of choices for management solutions.” From Chris Mellor at The Register: “REST APIs have been added to the Brocade switch and management products and these can be used to automate fabric inventory, provisioning, and operational state monitoring tasks. The open-source, Python-based PyFOS scripting language is supported and can be used in SAN management. &quot;The software automation tool Ansible has been integrated to support automation and orchestration across a Brocade infrastructure.” From Anna Ribeiro at CTR: “Brocade Gen 6 Fibre Channel is purpose-built to handle the low-latency and monitoring requirements for NVMe storage. New software optimization reduces latency by nearly 15 percent for Gen 6 platforms and enhanced integrated network sensors provide new insight into network health and performance of NVMe traffic. “Broadcom understands the nuances that go into infrastructure management and what tasks can benefit from Brocade automation. 
By introducing REST APIs directly into Brocade switch and management products, Broadcom offers a broad</description>
      </item>
      <item>
         <title>Open sourcing a new generation of switch Software Development Kit accelerates network innovation</title>
         <link>https://www.broadcom.com/blog/open-sourcing-a-new-generation-of-switch-software-development-kit-accelerates-network-innovation</link>
         <guid>https://www.broadcom.com/blog/open-sourcing-a-new-generation-of-switch-software-development-kit-accelerates-network-innovation</guid>
         <pubDate>January 31, 2018</pubDate>
         <description>Back in the 1990s, Ethernet switches were relatively simple devices that could be configured by direct access to the registers. As the number of features and resources that needed to be programmed increased, it was no longer practical to continue to access device registers directly. Chip vendors began providing functional APIs, which abstracted details of the register accesses from the user. These APIs enabled portability of software across multiple generations of devices, making it easier for OEMs and end users to maintain their software. Thus, the Software Development Kit (SDK) was born. To ease the development task of NOS developers, ASIC vendors took upon themselves the ownership of describing the expected network behavior, such as Layer 2 and Layer 3 functions, by creating new APIs. Over the next couple of decades, as networking chips became increasingly complex, there was corresponding growth in both the number and complexity of these APIs. Today’s state-of-the-art upper layer software needs much more direct visibility and control over hardware resources to be able to adapt rapidly to changing network conditions. Reliability and performance requirements have also increased significantly, thereby creating a need for SDK features such as asynchronous API calls, batched operations, atomic transactions and in-service software upgrades. The advent of Software Defined Networking (SDN), where a centralized controller programs all the switches in the network, has created additional requirements for automation, low latency and operational efficiency.

A new generation of SDK is here: SDKLT
To address these requirements, Broadcom has developed a new SDK, called SDKLT, based on the concept of Logical Tables (LT). In this paradigm, all the physical tables (such as MAC address tables, L3 route tables, TCAMs, registers, etc.) are exposed as logical table controls. 
Each logical table has an easy-to-understand structure consisting of one or more entries (rows) and keys</description>
      </item>
      <item>
         <title>At a Glance: Broadcom BCM56670, the industry’s first Ethernet switch built specifically for cellular fronthaul networks</title>
         <link>https://www.broadcom.com/blog/broadcom-bcm56670-at-a-glance</link>
         <guid>https://www.broadcom.com/blog/broadcom-bcm56670-at-a-glance</guid>
         <pubDate>June 18, 2018</pubDate>
         <description>With built-in IEEE 1914.3 Radio-over-Ethernet (RoE) mappers, the BCM56670 Monterey Ethernet switch is the first in the market that performs CPRI/Ethernet interworking and allows direct connections to CPRI-based radios and baseband processors.

Features, benefits, and applications:
- Feature: Integrated IEEE 1914.3 Radio-over-Ethernet (RoE) mappers with built-in de-jitter buffers. Benefit: Bridges the legacy cellular world and tomorrow’s 5G network. Applications: Base station line cards.
- Feature: 64 SerDes: 24x 25G dual-mode CPRI/Ethernet (2.5G to 25G CPRI; 1GbE to 100GbE), 16x 25-Gigabit Ethernet (1GbE to 100GbE) and 24x 10-Gigabit Ethernet (1GbE to 10GbE). Benefit: Consolidates all radio traffic onto a standard, Ethernet-based infrastructure. Applications: Cellular front-haul aggregation nodes.
- Feature: Terabit-class capacity to meet the 10x increase in capacity needed by 5G networks, with hardware support for key 5G requirements, including nanosecond-scale synchronization. Applications: Base stations.

Broadcom is committed to 5G innovation
“We are very pleased to announce another ground-breaking solution that addresses fundamental problems and opens a new, major market for our switching products,” said Ram Velaga, vice president and general manager, Switch Products at Broadcom. “The Monterey Ethernet switch is an excellent example of Broadcom’s deep commitment to 5G innovation and strategic R&amp;D investment.” &quot;5G will drive an order of magnitude increase in network-bandwidth requirements owing to faster radios and denser networks coupled with larger base stations serving more radios,” said Bob Wheeler, principal analyst at The Linley Group. “As a result, the industry is moving away from point-to-point CPRI radio links and towards a switched Ethernet infrastructure based on new protocols like eCPRI and IEEE 1914. Broadcom has developed a unique solution by extending its terabit Ethernet switch to address this new radio fronthaul application, adding support for Ethernet-based 5G radios as well as installed CPRI-based LTE radios.&quot;

By the numbers: IEEE 802.1Qbu pre-emption; 42.5-mm x 42.5-mm FCBGA package; up to 800 Gb/s and 496M</description>
      </item>
      <item>
         <title>At a Glance: Tomahawk® 3 is the first 12.8 Tb/s chip to achieve mass production</title>
         <link>https://www.broadcom.com/blog/at-a-glance-tomahawk-3-is-the-first-12-8-tb-s-chip-to-achieve-mass-production</link>
         <guid>https://www.broadcom.com/blog/at-a-glance-tomahawk-3-is-the-first-12-8-tb-s-chip-to-achieve-mass-production</guid>
         <pubDate>January 11, 2019</pubDate>
         <description>The StrataXGS® Tomahawk® 3 switch series is in high-volume production release, enabling deployment of Ethernet network equipment based on market-leading 12.8 Tb/s of switching and routing performance implemented on a single chip. The Tomahawk 3 series is the industry’s first fully production-qualified silicon family that supports high-density, line-rate 400GbE, 200GbE, 100GbE, and 50GbE interconnect for massive scale-out of software-defined cloud data centers, reducing cost per port by 75 percent and power per port by 40 percent compared to existing solutions.

Features, benefits, and applications:
- Feature: 12.8 Tb/s of switching and routing performance implemented on a single chip. Benefit: Enables the next major leap in hyperscale data center network throughput, supporting 32x400GbE, 64x200GbE, or 128x100GbE line-rate switching and routing. Applications: Single-chip solution for data center top-of-rack, aggregation and spine switches.
- Feature: 256 integrated SerDes with 56G-PAM4, supporting 200GbE and 400GbE. Benefit: Over 75% reduction in cost per port and 40% reduction in power per port versus alternatives. Applications: NVMe storage disaggregation.
- Feature: Up to 32x400GbE, 64x200GbE or 128x100GbE ports. Benefit: More than doubles the IP route forwarding scale compared to previous Tomahawk devices. Applications: Deep learning networks.

Our customers report: Tomahawk 3 gives them an immediate path for buildout
&quot;We commend Broadcom for achieving the Tomahawk 3 production milestone, and for their continued generational leadership in cloud switching technology,&quot; said Wade Shao, Director, Technology &amp; Engineering Group at Tencent. &quot;Having a robust, ready-to-deploy 12.8 Tb/s switching element is a key building block for our next-generation, leaf-spine infrastructure. It flattens the network topology and delivers the performance per Watt needed to further scale out our distributed applications.&quot; &quot;The rise of AI workloads and RDMA-enabled storage disaggregation is driving cloud server and storage interconnect to much higher speeds, and Tomahawk 3 now provides the industry with its first deployable solution for these applications with high-density 200/400GbE,&quot; said Zhenyu Hou,</description>
      </item>
      <item>
         <title>Broadcom’s Tomahawk&amp;reg; 3 Ethernet switch chip delivers 12.8 Tb/s of speed in a single 16 nm device</title>
         <link>https://www.broadcom.com/blog/broadcom-s-tomahawk-3-ethernet-switch-chip-delivers-12-8-tbps-of-speed-in-a-single-16-nm-device</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-tomahawk-3-ethernet-switch-chip-delivers-12-8-tbps-of-speed-in-a-single-16-nm-device</guid>
         <pubDate>December 20, 2017</pubDate>
         <description>Building upon the success of the StrataXGS® architecture already widely deployed in existing data centers, today’s announcement of the new Tomahawk 3 Ethernet switch chip heralds the next phase in hyperscale data center interconnect. At 12.8 Tb/s, Tomahawk 3 doubles the bandwidth of other market alternatives currently available. Broadcom achieved this doubling of bandwidth less than 13 months after the sampling of its 6.4 Tb/s Tomahawk 2, showcasing Broadcom’s unprecedented product-line velocity. Broadcom has achieved unparalleled silicon area and power efficiency, enabling implementation of Tomahawk 3 in the same 16-nm process node as Tomahawk 2, while reducing power per 100GbE port by 40 percent and cost by up to 75 percent. The 16-nm node is mature and well-characterized, allowing a very fast ramp to mass production and time-to-market. Tomahawk 3 uses the same SDK and APIs familiar to existing Tomahawk customers, further reducing development time and effort.

Tomahawk 3: By the numbers
- 12.8 Tb/s multilayer L3 Ethernet switching
- Configurable as 32 ports of 400GbE, 64 ports of 200GbE or 128 ports of 100GbE
- 256 dual-mode, high-reach SerDes, each of which supports 56G-PAM4 and 28G-NRZ over long-reach optics, Direct Attach Copper (DAC) and backplanes
- Low-latency StrataXGS pipeline architecture
- Delivers 40% lower power consumption per 100GbE switch port and up to 75% lower cost per 100GbE switch port
- Integrated 12.8 Tb/s shared-buffer architecture offers 4X higher burst absorption and provides the highest performance and lowest end-to-end query completion times (QCT) for RDMA over Converged Ethernet (RoCEv2) based workloads
- Broadview™ Gen 3 network instrumentation feature set and software suite, providing network operators comprehensive visibility into packet flow behavior, traffic management state, and switch internal performance
- Comprehensively and configurably supports all packet processing and traffic management requirements for next-gen hyperscale network use cases: &gt;2X IP route forwarding scale, 2X ECMP scale, Dynamic Load Balancing and</description>
      </item>
      <item>
         <title>At a Glance: The Broadcom Stingray PS1100R delivers breakthrough performance and efficiency for NVMe-oF storage target applications</title>
         <link>https://www.broadcom.com/blog/at-a-glance--the-broadcom-stingray-ps1100r-delivers-breakthrough-performance-and-efficiency-for-nvme-of-storage-target-applications</link>
         <guid>https://www.broadcom.com/blog/at-a-glance--the-broadcom-stingray-ps1100r-delivers-breakthrough-performance-and-efficiency-for-nvme-of-storage-target-applications</guid>
         <pubDate>August 29, 2018</pubDate>
         <description>Broadcom's Stingray™ PS1100R 100G PCIe adapter solution for Ethernet connected NVMe over Fabrics (NVMe-oF) storage applications is now available for order. It features unprecedented performance per watt in a fully integrated adapter solution, enabling fast development and delivery of storage services and Ethernet fabric-attached Flash solutions. The Stingray SoC offers significant advantages by integrating Broadcom’s market-proven 100G NetXtreme® Ethernet NIC, eight high-performance 3GHz 64-bit ARM® v8 Cortex®-A72 cores, hardware accelerators for cryptographic security, RAID and Dedup, and PCIe Gen 3.0 connectivity. This high level of SoC integration minimizes the overall power consumption and chip area compared to multi-chip solutions. The open and highly programmable ARM based architecture of the Stingray SoC provides a flexible software defined platform. Fabricated in a 16nm FinFET+ process, the Stingray SoC enables the highest available A72 CPU performance to deliver breakthrough data plane acceleration and storage performance in a compact form factor. 
Features, benefits, and applications:
- Feature: Highly optimized 100G Ethernet NVMe-oF storage target adapter. Benefit: Industry-leading bandwidth and IOPS in a small adapter form factor. Applications: NVMe-oF (RDMA-based) storage target.
- Feature: Fully integrated NVMe-oF solution with hardware accelerators for crypto, RAID and dedup. Benefit: Lower power consumption and smaller chip area compared to multi-chip solutions. Applications: Block/object storage target.
- Feature: NVMe/TCP support. Benefit: Ease of disaggregated storage deployment in any TCP network. Applications: NVMe/TCP (TCP-based) storage target.

Stingray delivers high performance, scalability and efficiency at low power
“We have a good history of working with Broadcom and believe their new Stingray adapter will help enable a new generation of NVMf composable infrastructure as it delivers extreme performance, availability and storage offload functions in a standard PCIe adapter,” said Scott Hamilton, senior director of product management at Western Digital’s Data Center Systems business unit. “We look forward to collaborating with Broadcom to help enable new breakthrough levels of scalability, efficiency and performance for Fast</description>
      </item>
      <item>
         <title>An unparalleled track record of Wi-Fi and Bluetooth innovation</title>
         <link>https://www.broadcom.com/blog/an-unparalleled-track-record-of-wi-fi-and-bluetooth-innovation</link>
         <guid>https://www.broadcom.com/blog/an-unparalleled-track-record-of-wi-fi-and-bluetooth-innovation</guid>
         <pubDate>February 2, 2018</pubDate>
         <description>This is part of a series examining Broadcom's role innovating the past decade of wireless technology. Long before the smartphone existed, there was infrastructure for Wi-Fi- and Bluetooth-enabled devices like laptops, PDAs and feature phones. Wi-Fi access was available in most places including airports, hotels, coffee shops and restaurants. And a large number of consumer devices, such as wireless headsets, portable speakers and printers, were equipped with Bluetooth. The smartphone market has been a strong magnet for Wi-Fi and Bluetooth innovations. Just in the last 10 years, smartphones have featured various generations of Bluetooth standards from v2.0 to v4.0 and now v5.0. Similarly, successive generations of Wi-Fi standards, from 802.11g to 802.11n to 802.11ac, have been adopted in smartphones. One of the biggest innovations, however, was the concept of combo chips that included both Bluetooth and Wi-Fi functions in a single chip. These combo chips ensured that the two wireless technologies that shared the 2.4 GHz spectrum coexisted and worked well in tandem. In the years ahead, mobile data consumption will continue to grow, and the number of devices connected to the Internet will soon surpass 20 billion, according to Gartner, Inc. Wi-Fi and Bluetooth will play a vital role in connecting smartphones to an expanding network of wireless access points and connected devices. Smartphone users will consume more video content and upload even larger amounts of multimedia data in the future. The mobile industry will look to new innovation in Wi-Fi to support this growing demand for video streaming and other bandwidth-intensive applications such as cloud data backup and mobile augmented reality (AR). 
The combination of 802.11ax Wi-Fi and Bluetooth 5.0 (BT5.0) will be essential in addressing the growing bandwidth needs and enabling ubiquitous connectivity across a vast number of connected devices, especially in crowded areas like stadiums, city</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for SDKLT</title>
         <link>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-sdklt</link>
         <guid>https://www.broadcom.com/blog/word-on-the-street-media-roundup-for-sdklt</guid>
         <pubDate>March 16, 2018</pubDate>
         <description>From Tom Hollingsworth at Gestalt IT: “Broadcom is making two pretty important leaps here. The first is a move to a logical table system. This is going to really change the way that we look at programming networks. I hope more people choose to adopt a similar system in the future, as the complexity necessary to write against an API is lessened and more people can start writing code without needing to know the intricacies of a given platform. “The other leap is the release under Apache 2.0. This means that there is a huge opportunity for new developers to come into the market and write apps for Tomahawk switches today as well as apps for newer switches tomorrow. The influx of developers on this platform should give Broadcom a huge lead in SDN development in the coming months, if not years.” From Bob Wheeler of the Linley Microprocessor Report: “In a bold departure, Broadcom is publishing the source code of its new software-development kit (SDK) for switch-chip drivers. The SDK is the company’s third set of open driver APIs, but it’s the first that allows others to modify and redistribute the source code. It also adopts a new API approach based on logical tables, so Broadcom calls it the SDKLT.” From Timothy Prickett Morgan at The Next Platform: “The SDKs have been the most important black box, and now Broadcom is getting out in front of its peers and ripping the top off. Or, more precisely, it has come up with an entirely new, yet backwards compatible, SDK that has an architecture that best fits the way companies – particularly the hyperscalers and cloud builders that represent somewhere north of 50 percent of the datacenter switching ports sold each year these days – want to use switch SDKs going</description>
      </item>
      <item>
         <title>At a Glance: The Broadcom 9400-series MegaRAID® controllers are the world’s fastest, delivering over 1.7M IOPS with NVMe</title>
         <link>https://www.broadcom.com/blog/at-a-glance--the-broadcom-9400-series-megaraid--controllers-are-the-world-s-fastest--delivering-over-1-7m-iops-with-nvme</link>
         <guid>https://www.broadcom.com/blog/at-a-glance--the-broadcom-9400-series-megaraid--controllers-are-the-world-s-fastest--delivering-over-1-7m-iops-with-nvme</guid>
         <pubDate>July 11, 2018</pubDate>
         <description>Broadcom is shipping the world’s fastest NVMe/SAS/SATA Tri-Mode SerDes MegaRAID family of controllers, enabling customers to experience new levels of read and write performance. Broadcom’s new family of high-performance 9400-series MegaRAID controllers supports up to 16 internal and external ports on a single chip while providing unmatched IOPS performance, data protection and the lowest power consumption for server OEMs, system integrators and storage manufacturers. With over 1.7M IOPS, the 9400-series RAID controllers provide the highest performance and robust data protection for intensive cloud scaling, web services, business intelligence and OLTP applications used within traditional and hyper-converged server platforms. With industry-leading power consumption starting at just 10 watts, the 9400-series is the most efficient RAID controller card in its class and provides the necessary power savings for any data center environment.

Features, benefits, and applications:
- Unmatched IOPS performance, data protection and the lowest power consumption for server OEMs, system integrators and storage manufacturers
- The most efficient RAID controller card in its class, providing the necessary power savings for any data center environment
- High-port-count SAS/SATA/NVMe controllers for direct-attached, high-connectivity applications
- Built for high-IOPS and data-intensive analysis applications, all-flash-based storage systems, and hybrid SSD and HDD solutions; meets or exceeds performance requirements
- Enables an easy, long-term storage growth strategy in practically any direct-attached storage scenario, including external storage requiring high connectivity
- SAS/SATA interface for host- or drive-side connect
- The industry's first purpose-built, high-IOPS NVMe/SAS/SATA Tri-Mode RAID controller
- Certified OS support for Microsoft Windows, VMware, Linux (SuSE, Red Hat), and FreeBSD
- Tri-Mode connectivity enabling maximum data center flexibility, with flexible choices, improved data rates and robust data protection

“Growth in data consumption is driving a transformation within clients on how they utilize IT to improve business results. Lenovo is working with Broadcom to address</description>
      </item>
      <item>
         <title>Broadcom's next-generation Inband Telemetry solution designed for Hyperscale Datacenters is here</title>
         <link>https://www.broadcom.com/blog/broadcoms-next-generation-inband-telemetry-solution-designed-for-the-hyperscale-datacenters-is-here</link>
         <guid>https://www.broadcom.com/blog/broadcoms-next-generation-inband-telemetry-solution-designed-for-the-hyperscale-datacenters-is-here</guid>
         <pubDate>November 9, 2018</pubDate>
         <description>Inband telemetry—also referred to as dataplane telemetry—has matured as a concept, and the industry has been looking for real-world deployment use cases as proof that real-time, inband, end-to-end packet and flow monitoring capabilities afford real value for managing and scaling networks. Inband telemetry offers a cost-effective method for monitoring and analyzing when packets enter and exit the network, the path packets and flows take through the network, the rate at which packets arrive at each hop, and how long packets spend at each hop—an indication of excessive latency and possible congestion. Capturing such details at the packet level is simply not possible with out-of-band management techniques. The Inband Flow Analyzer (IFA) uses Broadcom’s Inband Telemetry technology to enable a flexible packet and flow monitoring solution that can scale to monitor the large number of flows typically required in a Mega Scale Data Center (MSDC). The use of a separate header overcomes the inability to alter a flow’s Layer 4 header, while the header’s timestamps and metadata make it possible to analyze application-level TCP, UDP, and encapsulation-agnostic tunneled flows end-to-end across a network. Optionally, the IFA header and its content can also be added by smart NICs in either hardware or software, extending real-time visibility to server elements as well. A programmable dataplane enables metadata gathering from the network elements in an optimal manner for some data center use cases. IFA complements a programmable dataplane through innovative mechanisms to enable next-gen use cases, such as measuring application latency in cloud networks, in a scalable way. Broadcom’s IFA is being deployed in commercial MSDC networks to deliver optimal packet and flow monitoring capability that leverages the existing Inband Telemetry features of Broadcom’s Trident 3 silicon. Broadcom is working with its industry-leading partners, including Alibaba and</description>
      </item>
      <item>
         <title>Industrial Ethernet: Connecting the Industry 4.0 Ecosystem</title>
         <link>https://www.broadcom.com/blog/industrial-ethernet-connecting-the-industry-4-0-ecosystem</link>
         <guid>https://www.broadcom.com/blog/industrial-ethernet-connecting-the-industry-4-0-ecosystem</guid>
         <pubDate>April 27, 2018</pubDate>
         <description>Since the introduction of the 10BASE-T standard three decades ago, Ethernet has penetrated virtually every piece of networking hardware imaginable. This includes many home electronics that most of us use and depend on in our daily lives, such as personal computers, internet routers, printers and TVs. Similar to the Ethernet communications used in homes and offices, there have been various Ethernet implementations over twisted-pair cables that enable machine-to-machine communications in industrial automation applications. Currently, there are more than a dozen Ethernet protocols used in the industrial arena, including EtherNet/IP, Modbus TCP, Profinet, EtherCAT, Ethernet Powerlink, BACnet, and SERCOS III. Though a variety of programmable logic controller (PLC) communication protocols are still in use today, like CompoBus, DeviceNet and RS-485, the majority of non-Ethernet protocols are proprietary and often not interoperable, inhibiting the ability to interconnect devices on different platforms. Ethernet has become the de facto communication technology of choice to connect and tie most things—if not all things digital—together in manufacturing environments. Not only does Ethernet support multiple protocols, there are foundational benefits of Ethernet that are essential to the next wave of digital industrialization, commonly known as Industry 4.0. BENEFITS OF INDUSTRIAL ETHERNET: Seamless connectivity inside and outside of factory premises. The industrial automation market is highly fragmented. There are more than 50 PLC vendors, and the number of data bus and Ethernet protocols being supported exceeds 20. Interoperability between devices and/or machines from different manufacturers has been a major issue that limits the expansion of PLC networks. As industrial enterprises transition to Industry 4.0, technical requirements will grow in terms of connectivity and reach. 
Transitioning industrial protocols to Ethernet would enhance interoperability and allow seamless connectivity with an expanding ecosystem of Ethernet devices inside and outside of the factory premises. Further, Ethernet is the portal to the Internet</description>
      </item>
      <item>
         <title>Broadcom’s AFBR-S50 takes optical distance and motion measurement to a new level</title>
         <link>https://www.broadcom.com/blog/broadcom-s-afbr-s50-takes-optical-distance-and-motion-measurement-to-a-new-level</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-afbr-s50-takes-optical-distance-and-motion-measurement-to-a-new-level</guid>
         <pubDate>March 29, 2018</pubDate>
         <description>Optical distance measurement has a variety of uses in industrial applications. For years, time-of-flight (ToF) sensors have been used in measurement and monitoring tasks in factory automation and production. Common uses include monitoring the position of an object as it is transported on a conveyor belt, checking the orientation and surface profile of an object before it is packaged, and analyzing the movement pattern of a robotic arm to minimize the risk of failure and ensure human safety. New use cases for optical distance measurement require precise 3D information and extended range. Meeting these requirements in the presence of optical interference can be very challenging, especially considering the frequent opening and closing of doors that lets external light into a manufacturing plant, or drastic changes in lighting conditions around a robotic welding cell. Broadcom has addressed these requirements in its latest AFBR-S50 offering and taken optical distance and motion measurement performance to a new level. The AFBR-S50 is based on the indirect optical time-of-flight (ToF) principle. (See Figure 1, below.) The ToF sensor technology has been developed with a special focus on applications that need high-speed, high-accuracy measurement with extended range, a small footprint and low power consumption. With unrivaled ambient light suppression of up to 200k lux, the technology can also be used in outdoor environments. Figure 1: Principle of Optical ToF Distance/Motion Measurement Broadcom’s AFBR-S50 is a multipixel ToF sensor platform that supports up to 3,000 frames per second with up to 16 illuminated pixels out of 32. The device is also suited for gesture-sensing applications that require high-speed measurement (at rates up to 3 kHz). The AFBR-S50 is capable of providing accurate distance measurement of an object up to 10 meters with an accuracy always</description>
      </item>
      <item>
         <title>Data center infrastructure – a vital part of our data-driven future</title>
         <link>https://www.broadcom.com/blog/data-center-infrastructure---a-vital-part-of-our-data-driven-future</link>
         <guid>https://www.broadcom.com/blog/data-center-infrastructure---a-vital-part-of-our-data-driven-future</guid>
         <pubDate>November 30, 2018</pubDate>
         <description>The world is becoming increasingly hyper-connected as more and more devices get added to the network—from smartphones to digital appliances to autonomous machines. The total number of connected devices is expected to surpass 50 billion units by 2022. The aggregate amount of data demanded by these devices is enormous. Further, the explosion in user data demand, applications and connected services has a multiplicative effect on network traffic, which typically translates to 5X or more in machine-to-machine traffic generated in the data center. Data centers are an integral part of modern network infrastructure. Vast amounts of information travel to and from the data center each second supporting various network functions, applications and services. For many enterprises, data centers play a vital role in delivering IT services and providing storage, communications and networking to support day-to-day business operations. Data centers enable a broad range of business-critical applications such as enterprise resource planning (ERP), customer relationship management (CRM), messaging and collaboration, and business analytics. Similarly, consumers—through their mobile devices—rely on data centers for essential applications, such as email, cloud storage, video on demand (VoD), and smart digital assistants. Data centers also empower a growing number of networked devices like wireless sensors, surveillance cameras, and smart light bulbs with analytics and intelligence, making daily lives more convenient and comfortable. The impact of data centers on modern digital lifestyles is immeasurable. Global data traffic continues to accelerate with no sign of slowing down. There are multiple megatrends impacting the network infrastructure which will further drive the growth and expansion of data centers. 
These include: Artificial Intelligence (AI), Intelligent Edge Computing, Software-Defined WAN (SD-WAN), 5G Wireless, Virtual Reality (VR) and Augmented Reality (AR). Next-generation data centers will be equipped with more advanced networking, computing and storage technologies to provide greater IT agility and efficiency, tackling new challenges</description>
      </item>
      <item>
         <title>BCM58800 NetXtreme® S-Series named Linley Group’s Best Embedded Processor </title>
         <link>https://www.broadcom.com/blog/bcm58800-netxtreme-s-series-named-linley-group-s-best-embedded-processor</link>
         <guid>https://www.broadcom.com/blog/bcm58800-netxtreme-s-series-named-linley-group-s-best-embedded-processor</guid>
         <pubDate>March 6, 2018</pubDate>
         <description>As part of its annual program, The Linley Group has awarded the Broadcom BCM58800 NetXtreme-S SoC network controller its 2017 Analyst Choice Award for Best Embedded Processor. “We were impressed by the performance, low power and silicon integration of the BCM58800 processors,” commented Linley Gwennap, principal analyst of The Linley Group. &quot;Addressing emerging markets like cloud-based SmartNIC offload and NVMe-oF storage disaggregation, it’s the first SoC to combine powerful 3GHz A72 cores and a 100Gbps NIC in a 16nm FinFET process. The combination of these capabilities and immediate product availability puts Broadcom well ahead of its competition.” NetXtreme-S as SmartNIC for tomorrow’s data centers SmartNICs have become a critical component in today’s cloud infrastructure. A SmartNIC allows a cloud provider to move network, storage and management workloads from the x86 CPUs to the SmartNIC, freeing up costly x86 cores that the cloud provider can sell to its customers. Since these workloads can consume almost 50 percent of all cores in a server processor, moving them to a SmartNIC significantly lowers the overall cost of business for cloud providers. In addition, a SmartNIC provides clear separation between the cloud provider’s network and the x86 CPUs used by its customers. This clear separation proves critical in light of the recently revealed Spectre and Meltdown security vulnerabilities, which might allow tenants to access the cloud provider's network when such physical separation of memory resources is not in place. Cloud providers also look to use SmartNICs as a way to deliver deterministic SLAs to their customers. When all infrastructure workloads owned by the provider are offloaded to a SmartNIC, customers using the x86 host can get consistent service attributes, independent of the provider's workloads. 
Critical elements of a modern SmartNIC include: General-purpose, high-performance CPUs for fast feature velocity of network and storage offloads</description>
      </item>
      <item>
         <title>Broadcom G.fast expands ultrafast broadband on copper-fiber infrastructure</title>
         <link>https://www.broadcom.com/blog/broadcom-g-fast-expands-ultrafast-broadband-on-copper-fiber-infrastructure</link>
         <guid>https://www.broadcom.com/blog/broadcom-g-fast-expands-ultrafast-broadband-on-copper-fiber-infrastructure</guid>
         <pubDate>February 14, 2019</pubDate>
         <description>Exponentially growing demand for ultrafast broadband speeds among consumers, driven by video streaming services as well as an increasing number of IoT devices that need access to the cloud, is driving the wider deployment of fiber by service providers as the go-to medium for fixed-broadband access. This consumer demand, combined with the reality that fiber can be delivered over greater distances than copper cable solutions, has resulted in fiber being deployed in nearly all greenfield opportunities of service provider networks. New copper technologies are emerging as ideal complements to fiber. However, for brownfield locations -- the vast majority of consumer locations -- universal fiber deployment can be both problematic and costly. In these locations, particularly in the final meters leading up to and within the customer premises, running fiber cable can be heavily regulated, and in some cases impossible. In addition, the installation can be complicated and time-consuming, which significantly drives up costs. For many service providers, G.fast technology has emerged as a powerful and compelling alternative to fiber, delivering fiber-like speeds over existing copper infrastructure. Broadcom G.fast devices are included in the applications below and allow service providers to extend their ultrafast broadband offerings where economical fiber delivery is unfeasible. As recently announced by the Broadband Forum, there is growing momentum behind G.fast technology, with more than 40 products certified interoperable and an increasing number of operators deploying these products. A complimentary report providing a G.fast market update accompanied the press release, and the highlights of that report are below. AT&amp;T is deploying 700Mb/s G.fast service to U.S. customers in apartment and condominium buildings. BT Openreach has been delivering 300Mb/s service over 300m loops via G.fast for its U.K. customers for more than a year. 
G.fast’s ability to get quickly to market over existing copper complements their</description>
      </item>
      <item>
         <title>Broadcom’s 7-nm PAM-4 optical platform accelerates 400GbE deployments in hyperscale data center and cloud networks</title>
         <link>https://www.broadcom.com/blog/7nm-pam4-optical-platform-accelerates-400gbe-deployments</link>
         <guid>https://www.broadcom.com/blog/7nm-pam4-optical-platform-accelerates-400gbe-deployments</guid>
         <pubDate>March 4, 2019</pubDate>
         <description>New and emerging technologies such as AI, 5G, SD-WAN, and cloud-edge computing are driving the expansion of network bandwidth. Data center operators and cloud providers are continually tasked with updating their systems to cope with rising bandwidth demands. As such, 400GbE platforms are being increasingly adopted and deployed in hyperscale data center and cloud networks. 400GbE has become the main interface connecting the latest generation of multi-terabit switches with 100G-per-lambda optics. The evolution of network switch connectivity Since the introduction of Broadcom’s Trident II switch series in 2012, network switch chip bandwidth has increased tenfold, from 1.28 Tb/s to 12.8 Tb/s. Data speeds at the PHY, PMD IC and optical components have improved substantially, enabling high-speed communications between switches. Through each generation of Ethernet interfaces, from 40GbE to 100GbE, 200GbE and 400GbE, new SerDes had to be architected to ensure robust signal integrity over existing fiber cabling. While optical data transmission per lane has risen from 10.7 Gb/s to 106 Gb/s, the dominant optical module form factor has remained QSFP due to faceplate density and adoption across a wide variety of network applications. Next-generation deployments at 400GbE are targeted with both QSFP-DD and OSFP module form factors. The benefits of 100G single-lambda optics The IEEE 802.3bs Task Force adopted 100G single-lambda optics as a building block for 400G Ethernet. Compared to multi-wavelength solutions (i.e., 4x25G), the single-wavelength or single-lambda (i.e., 1x100G) approach provides a more streamlined optical data path requiring fewer front-end optical components. Being able to deliver 100 Gb/s per wavelength over a single-mode fiber (SMF) simplifies the optical module design, effectively reducing the number of PMD ICs and III-V optical components by a factor of four. PMD ICs include linear laser drivers and transimpedance amplifiers (TIAs). 
III-V optical components include electro-absorptive modulated lasers (EMLs) and PIN</description>
      </item>
      <item>
         <title>Emulex Gen 7 Fibre Channel HBAs deliver up to 55 percent better Oracle Database 12c performance for all-flash arrays and NVMe devices</title>
         <link>https://www.broadcom.com/blog/emulex-gen-7-fibre-channel-hbas</link>
         <guid>https://www.broadcom.com/blog/emulex-gen-7-fibre-channel-hbas</guid>
         <pubDate>March 19, 2019</pubDate>
         <description>Fibre Channel Host Bus Adapters by Broadcom, deployed with the latest servers and all-flash arrays, enable unprecedented application acceleration. Demartek tested the newly released Emulex® LPE35002-M2 32GFC HBAs with a NetApp AFF A800 all-flash array and a Brocade® Gen 6 switch and found significant latency and IOPS improvements, plus new operational efficiency and security enhancements. Demartek’s performance testing showed: 67 percent lower latency compared to Emulex Gen 6 32GFC FC HBAs; 55 percent faster Oracle 12c OLTP performance vs. QLogic 32GFC HBAs; and more than 5 million IOPS -- over 5x more than QLogic 32GFC HBAs. As shown in the chart above, the Oracle Database 12c OLTP workload achieved 55 percent higher transactions per minute (TPM) with the Emulex Gen 7 32GFC HBA than with the QLogic 32GFC HBA. By changing the HBA to the Emulex Gen 7 model, customers are able to extract more application value out of an existing server and storage investment. The chart below shows the results of Demartek IOPS performance testing of Emulex Gen 7 and Gen 6 HBAs, and the available QLogic 32GFC HBA. Demartek observed more than 5 million IOPS across two ports -- that's five times more IOPS than the QLogic HBA. Emulex Gen 7 HBAs feature a new fastpath design that provides hardware acceleration for Broadcom’s Dynamic Multi-core architecture, reducing latency for each transaction by processing I/O requests in hardware and thereby operating significantly faster than software-based solutions. These performance advances make Emulex Gen 7 HBAs well suited to NVMe deployments and demanding applications, with the capability to handle I/O spikes under peak workload conditions. The trunking feature on Emulex Gen 7 HBAs (also known as port aggregation) provides a method to aggregate physical ports together to form a single logical high-bandwidth port up to 128GFC for use by applications such as data warehousing and virtual machine</description>
      </item>
      <item>
         <title>Brocade® SAN Automation product recognized as category innovator</title>
         <link>https://www.broadcom.com/blog/brocade-san-automation-product-recognized-as-category-innovator</link>
         <guid>https://www.broadcom.com/blog/brocade-san-automation-product-recognized-as-category-innovator</guid>
         <pubDate>March 28, 2019</pubDate>
         <description>Broadcom announces that its Brocade SAN automation product has been named a winner in Storage Magazine and TechTarget's 2018 Products of the Year Awards in the Storage Management Tools category. Enterprise storage products were judged based on technological innovation, performance, ease of integration, ease of use and manageability, functionality and value.

The award validates Brocade’s vision and innovation in realizing the autonomous SAN. Today’s enterprises face the challenges of resource-constrained organizations coupled with pressure to deliver ever-greater support and business intelligence. According to data from ESG, 66 percent of IT decision makers say that IT is more complex than it was just two years ago.
Brocade addresses this issue by leveraging more than 20 years of networking experience to identify where automation can make the most impact: eliminating manual tasks that consume nearly 50 percent of admins’ time. Because enterprises have different automation needs, Brocade Storage Networking has engineered a layered set of tools to enable SAN automation. These tools are built on a solid foundation of RESTful APIs to provide admins and third-party automation tools access to the Brocade family of directors and switches.

Brocade SAN automation transforms the way IT can manage storage networks. As storage consumption grows, the Fibre Channel SAN can also grow and scale easily through automation, which increases manageability and simplifies operations.

LEARN MORE

Automating the Fibre Channel Data Center -- White Paper

Download the free SAN Automation for Dummies eBook

VIDEO: When you automate with Brocade, you accelerate EVERYTHING</description>
      </item>
      <item>
         <title>Trident 3 programmable switch outperforms competitor</title>
         <link>https://www.broadcom.com/blog/trident-3-programmable-switch-outperforms-competitor</link>
         <guid>https://www.broadcom.com/blog/trident-3-programmable-switch-outperforms-competitor</guid>
         <pubDate>May 30, 2019</pubDate>
         <description>Broadcom collaborated with the independent analyst firm Enterprise Strategy Group (ESG) to publish a performance validation report illustrating how Broadcom’s 3.2 Tb/s Trident 3 switch chip compares with a competitor's 3.2 Tb/s switch chip on important real-world data center metrics. Broadcom has designed its products based on extensive feedback from both cloud and enterprise data center customers as to which metrics are the most critical for real-world data center networks. For example, while the “unloaded” or “fall-through” RFC2544 measured latency is often quoted as a figure of merit for L2/L3 switches, data center operators know that this metric has little to no relevance to actual application performance — because a switch under such a test is passing purely synthetic traffic between individual port pairs, with none of the many-to-one bursts that are typical of real-world server-to-server and server-to-storage communications. Instead, network performance should be characterized while switches are experiencing steady-state or transient load levels, under real-world application conditions such as what one would observe in a Hadoop cluster running an HDFS file system. In addition, the relevant performance benchmark should focus on how the switch behaves when typical packet processing operations are performed on live network traffic — such as tunneling, access control filtering or packet editing. RFC 2544 metrics do not capture any of this. The ESG report focuses on emulating real-world data center workloads and comparing the relative performance of Trident 3 versus a competitor under a series of network test setups. Some of the key findings of the report are: TCP Performance — In a TCP Incast scenario common to Hadoop clusters, Trident 3 maintained steady transfer rates while the competitor suffered severe rate dips and showed almost 17% longer file transfer time. 
TCP + RoCEv2 Performance — When the TCP Incast test was augmented with simultaneous RoCEv2 lossless</description>
      </item>
      <item>
         <title>Broadcom micro optics shape tomorrow’s technologies through continuous innovation</title>
         <link>https://www.broadcom.com/blog/broadcom-micro-optics-shape-tomorrow-s-technologies-through-continuous-innovation</link>
         <guid>https://www.broadcom.com/blog/broadcom-micro-optics-shape-tomorrow-s-technologies-through-continuous-innovation</guid>
         <pubDate>July 25, 2019</pubDate>
         <description>For more than 25 years, Broadcom Micro Optics has provided the innovation to enable much of the technology we associate with our modern, information-driven society. From parallel optics used in high-density data transmission on the internet, to custom illumination optics that maximize resolution in semiconductor manufacturing, to high-power laser beam-shaping that assures success in delicate surgical applications, Broadcom’s micro optic products continue to empower our customers to take on new challenges. Broadcom diffractive optics drive unprecedented advances in semiconductor manufacturing Nowhere has the impact of Broadcom Micro Optics products been seen more clearly than in enabling the steady march toward smaller components and higher functional densities of logic and memory chips. Often referred to as “Moore’s Law,” these advances would not be possible without Broadcom’s custom diffractive optical diffusers, which have formed a critical component of semiconductor lithography exposure tools since the 1990s. By customizing the illumination system to the device being manufactured, customers rely on Broadcom’s diffusers to maximize the resolution of excimer-based steppers and scanners while also improving throughput and process window. As computing power and functional density have dramatically increased, so has the demand for data-communications bandwidth, both in the network and inside the data center. Broadcom Micro Optics has been at the forefront of this advance with multiple generations of parallel diffractive optics supporting multiple data rates. These types of optics reduce optical noise in the path by carefully controlling the launch condition, resulting in higher performance margins at lower overall optical power. 
Broadcom diffractive and refractive micro optics bring unparalleled performance to the consumer, industrial, medical and datacom sectors Closer to home, Broadcom’s structured light generator DOEs are found in hand-held devices such as bar code readers commonly used to scan boarding passes, packages and patient information. Similar components are routinely used in</description>
      </item>
      <item>
         <title>Broadcom's new Trident 4 and Jericho 2 switch devices offer programmability at scale</title>
         <link>https://www.broadcom.com/blog/trident4-and-jericho2-offer-programmability-at-scale</link>
         <guid>https://www.broadcom.com/blog/trident4-and-jericho2-offer-programmability-at-scale</guid>
         <pubDate>June 27, 2019</pubDate>
         <description>There has been a considerable amount of publicity regarding programmability in network switches. The basic vision is to enable programming of network switches similar to server programming. Toward that end, there is an effort to define a data plane programming language and compiler tools that enable users to define the data plane functionality in network switches. Network switches are the plumbing of the internet and the data centers of the world. They are expected to be highly available, deliver consistent performance with multiple features enabled, and perform in the presence of congestion. The addition of new forwarding features to a switch needs to take into account the base L2 and L3 features, so adding new features is expected to be done by OEMs and sophisticated customers, because all features need to work simultaneously. Programmability, for the most part, is not one of the top considerations for end users. There are, however, benefits to having programmability in network switches. OEMs can differentiate their switch products in terms of forwarding capability by defining data plane functionality specific to their market segment. Three aspects need to be considered when allowing for programmability in switches: feature capacity, feature concurrency and programmability. A network switch has to excel in all three vectors for it to be production grade. A switch has to have sufficient capacity to allow table scales for the relevant features. Several features need to be concurrently available because the same switch can be used in different places in the network (PIN). Finally, programmability provides the flexibility for an OEM or user to tailor the switch to their specific needs. Broadcom devices (DNX and XGS) have long supported programmability. The DNX line of switches – starting from Arad and continuing through the Jericho family – support programmability of</description>
      </item>
      <item>
         <title>Word on the Street: Media roundup for Trident 4</title>
         <link>https://www.broadcom.com/blog/media-roundup-for-trident-4</link>
         <guid>https://www.broadcom.com/blog/media-roundup-for-trident-4</guid>
         <pubDate>August 8, 2019</pubDate>
         <description>From Rick Merritt at EE Times: &quot;The Trident 4 family, which spans switches with 2- to 12.8-terabits-per-second aggregate bandwidth, is aimed at business networks that need a variety of management features. The 21 billion transistor chip packs up to 256 50G PAM4 SerDes and manages up to 5 billion packets per second in a single chassis. &quot;Trident 4 is pin-compatible with Tomahawk® 3, Broadcom’s 12.8T switch launched in late 2017 and aimed at hyperscalers. The two chips have similar power consumption and thermal requirements. &quot;The chip is among the first batch of 7-nm devices from Broadcom. A spokesman described the node as “a heavy lift” compared to 16 nm due to larger design databases and more stringent design and timing rules. However, it did deliver density and power improvements, he said.&quot; From Dan Meyer at SDx Central: &quot;Broadcom launched a new Trident chip that sports four-times the speed of its previous iteration and blurs the performance line with the firm’s high-end Tomahawk product. It could also play a more important role in positioning the vendor within the rapidly evolving chip market. &quot;Peter Del Vecchio, marketing manager at Broadcom, explained that the Trident 4 chip is targeted at enterprises that are looking to balance cost, performance, and programmability. He noted that the new chip does this by taking advantage of merchant silicon-based systems that have typically been the domain of hyperscaler-based products. &quot;The Trident 4 is built on a monolithic 7-nanometer (nm) architecture, which Del Vecchio said was the first of its kind in the segment. It can also scale from 2 Tb/s up to 12.8 Tb/s, which matches the performance of Broadcom’s current Tomahawk 3 chip targeted at the hyperscaler market. &quot;Del Vecchio said that speed allows customers to reduce their operational footprint from a multi-chip system onto a single</description>
      </item>
      <item>
         <title>Tips for Putting AIOps into Practice: What You Can Do Right Now</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/tips-for-putting-aiops-into-practice-what-you-can-do-right-now</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/tips-for-putting-aiops-into-practice-what-you-can-do-right-now</guid>
         <pubDate>January 16, 2019</pubDate>
         <description>You know why AIOps is useful in theory. But do you understand how to put AIOps into practice and reap real-world benefits? If not, this blog is for you. Although the field of AIOps remains relatively young, and AIOps-enabled tools are still developing, you can start taking steps now to organize your IT team and processes in ways that will allow you to take advantage of AIOps. AIOps in Theory At the theoretical level, the benefits of AIOps are easy to identify. AIOps lets IT Ops and DevOps teams use data, machine learning and artificial intelligence to automate tasks that would otherwise depend upon manual human intervention. As a simple example, an AIOps tool could identify a virtual server that is running out of disk space, then increase the virtual disk allocation automatically, without requiring a human IT engineer to recognize the problem and intervene. Similarly, an AIOps tool could trace a problem back to its root cause, an important function in today's complex, software-defined, fast-changing architectures, where surface-level issues can be difficult to interpret without delving into reams of data. Practical AIOps Planning How do you operationalize these and other theoretical use cases for AIOps? The answer, as noted above, has not yet been fully fleshed out because AIOps is still a developing field. Nonetheless, there are things you can do today to prepare for the possibilities of AIOps as AIOps-enabled tools enter production. Identify Processes that Can Best Leverage AIOps An obvious first step in preparing to take advantage of AIOps is to identify the tools and processes that AIOps can help you improve. AIOps won't be able to help with every IT-related task at every organization. As an example, it probably won't do much to help you plan long-term IT staffing needs (although it</description>
      </item>
      <item>
         <title>Remove the Complexity of the New Software Defined Networking Stack</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/remove-the-complexity-of-the-new-software-defined-networking-stack</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/remove-the-complexity-of-the-new-software-defined-networking-stack</guid>
         <pubDate>October 10, 2018</pubDate>
         <description>Rewrite the assurance rulebook for success in the new software defined networking world. As organizations around the world adopt software defined networking (SDN) as a modern delivery strategy for the digital experience, they are building complicated networking stacks in their data centers as they invest in new network assets and technology – physical, virtual and software-defined. Multiple network layers breed complexity, and complexity breeds bottlenecks. Just one bottleneck in any part of the new network stack can have a rapid impact on application performance and the customer experience. To decrease this complexity, organizations need to be able to see and assure every layer in the new network stack. They need to know when a new network function is spun up, its utilization and relationships, and when it is decommissioned. Figure 1: One network operations monitoring experience is critical to the success of software defined networking deployments Peel back the software defined networking layers Although new self-service provisioning models, such as SDN and Network Functions Virtualization (NFV), increase flexibility, they also decrease visibility. Dynamic changes are harder to track. Problems are harder to find. And performance is harder to control. To realize the full benefits of SDN and NFV, organizations need to be able to peel back every layer of the supportive infrastructure, so they can understand, manage and assure what is happening underneath. Virtual network functions (VNF), like firewalls and load balancers, or content filters and optimizers, are critical links in the service chain and need to be permanently on the performance management map. Each of these links contributes to servicing the customer experience. For example, AT&amp;T is utilizing software defined networking to improve how they deliver a reliable experience to their customers. The need for effective software defined networking assurance across such complex and dynamically changing environments is becoming</description>
      </item>
      <item>
         <title>How Managed Service Providers and Enterprises Can Leverage CA UIM 9.0.2 to Segment Portions of Their Network</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-managed-service-providers-and-enterprises-can-leverage-ca-uim-9-0-2-to-segment-portions-of-their-network</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-managed-service-providers-and-enterprises-can-leverage-ca-uim-9-0-2-to-segment-portions-of-their-network</guid>
         <pubDate>December 11, 2018</pubDate>
         <description>For many enterprises and all managed service providers (MSPs), the ability to segment portions of the network is a fundamental business requirement. For enterprises, segmentation may be based on divisions, locations, or types of systems, e.g. operating systems. MSPs may use similar distinctions, but clearly keeping tenant data secure from other tenants is a vital requirement. Distributed enterprise and MSP environments that use CA Unified Infrastructure Management have multiple hubs and robots configured to serve different customers (tenants/accounts), each of which is considered an “origin”. In the new version of CA UIM, release 9.0.2, dedicated hubs and robots for each specific tenant or enterprise tenant are deployed and configured with unique origin names. All the devices and QoS metrics originating from these hubs/robots will contain origin information, which is used to classify and segregate data and views in UMP. The contact origins feature in CA UIM provides a way to manage multiple customers for an MSP or specific segments for an enterprise. With this feature, you can enable or disable user access to resources based on an origin. As an MSP or enterprise systems administrator, you can globally modify the user-origin association of existing or new users by mapping them to specific origins. When this feature is enabled, users do not have access to any resources until you enable the pre-provisioned list of origins for them to manage. You can enable multiple origins for CA UIM account users. We expose various APIs to implement this feature. The following APIs allow customers to execute CRUD operations on the user that they want to map to an origin. How to Enable Contact Origins Users can enable contact origins (sub-tenancy) for their environment in three simple steps: Step 1: Create or modify the users in the UMP server. Account-origin setup: Map</description>
      </item>
      <item>
         <title>Preventing the Next &quot;Silent Hill&quot; Horror with a New Model for APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/preventing-the-next-silent-hill-horror-with-a-new-model-for-apm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/preventing-the-next-silent-hill-horror-with-a-new-model-for-apm</guid>
         <pubDate>April 23, 2018</pubDate>
         <description>As an Australian, I was interested to learn that we antipodeans hold the record for the longest-burning underground coal fire. Yep, the aptly named Burning Mountain has been ablaze for – wait for it – 6,000 years. As it turns out, coal seam fires are incredibly common, with dire consequences for folks living in close proximity. In 2014, a fire 1,000 miles southwest of Burning Mountain spewed toxic gases on the unfortunate townsfolk of Morwell for 45 days, while the one raging under Centralia, Pennsylvania in the US since 1962 has forced the town to become all but abandoned – and the inspiration for the horror movie Silent Hill. Many of us in tech have our own Silent Hill-type systems: creaking technologies underpinning our customer-facing software applications. Analogous to coal fires, they wreak havoc on our digital strategies and the people needed to support them. Not least because folks spend most of their time constantly fighting fires and in toxic alert mode. Underground coal fires are tough suckers to control, but it doesn’t have to be this way in business technology. With advances in instrumentation and monitoring, every element contributing to performance across the tech stack (app to infrastructure; on-premises to cloud) should be made visible and managed in the context of the business services they support. This all sounds captain obvious, but in practice it’s hard to achieve. Traditionally, IT operations teams have been organized in a stratum-type fashion. One team to support each layer – the app, infrastructure, network and so on. Monitoring is aligned accordingly, with discrete tools dedicated to each layer of the stack. And the more components we add, the more tools we acquire – each adding to the seams of siloed data. But when fires break out these</description>
      </item>
      <item>
         <title>Does an Agent Impact Your App? How to Determine Your Agent Footprint</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/does-an-agent-impact-your-app-how-to-determine-your-agent-footprint</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/does-an-agent-impact-your-app-how-to-determine-your-agent-footprint</guid>
         <pubDate>February 13, 2018</pubDate>
         <description>Probably the most common consideration before deploying a monitoring agent into your application is “What is the agent footprint?” A widely understood concept in physics is the observer effect, whereby the simple act of observing a situation alters that situation. In this case, the situation is your application and the observer is monitoring instrumentation such as an Application Performance Management (APM) agent. The goal of most monitoring is to provide adequate visibility without sacrificing quality, meaning without significant impact to app performance or end-user experience. So why do most vendors shy away from definitive agent footprint claims? Well, the reality is, it depends. Agent effect is dictated by several factors including your application architecture, platform, resource allocation, transaction load on the application and, of course, the configuration of the monitoring. Let’s take an example of two teams with Tomcat applications. The first app team may choose typical, default monitoring while the second team opts for maximum profiling. You might assume the second team should expect more impact from the monitoring. It is certainly a possibility. However, let’s say the response times for app B average 3 seconds. That means, even if the monitoring adds, say, 100ms, it would be undetectable to end users; only apparent in millisecond-granularity measurements by a monitoring tool. Conversely, if app A were a very low-latency app, averaging 100ms response times, introducing 50ms for monitoring would have a dramatic impact. Similar concepts apply for memory and CPU utilization impact. In short, both the application and the agent play a role in determining the footprint. A simple round of testing can help app teams determine the level of visibility necessary for effective monitoring and triage as well as determine if any corresponding impact is truly detectable, worthwhile or lost in the noise.</description>
      </item>
      <item>
         <title>Predictive Monitoring Tools for Software Defined Networking</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/predictive-monitoring-tools-for-software-defined-networking</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/predictive-monitoring-tools-for-software-defined-networking</guid>
         <pubDate>May 14, 2018</pubDate>
         <description>Full stack network monitoring and added synthetic insights validate pre-production and production software defined networking deployments. The Challenge: Unpredictable Modern Network Architectures Deploying new services based on cloud and software defined networking architectures comes with a new level of complexity for network operations, as well as network monitoring software. Cloud and SDN enable infrastructures to become agile enough to feed consumer demand for constant access to applications, data and bandwidth, but network monitoring needs to become just as agile to discover, visualize, identify, scale and predict to meet the constant rate of change within these dynamic architectures. Additionally, service delivery on top of modern network infrastructures requires validation prior to and at the time of deployment via active testing of real-world traffic patterns, along with live monitoring across all the layers of the new software defined networking stack, identifying bottlenecks and validating service level agreements (SLAs), all while doing it from the end-user perspective. The Solution: Active Testing and Live Monitoring for Predictive Network Behavior Netrounds Active Testing, when used with Network Operations and Analytics from CA Technologies, offers a comprehensive and scalable real-time view of network behavior and end-to-end network service quality. The combined capabilities act as an umbrella full-stack fault and performance monitoring system, monitoring network, server and application performance using passive techniques. Netrounds adds active test results and continuous monitoring KPIs for network services to the overall network and service health picture. This award-winning combination of active service data and passive infrastructure performance monitoring is designed to create a comprehensive end-to-end view of network service quality from the end-users’ perspective, based on predictive network behavior as well as inventory, topology, faults, flow and packet analysis converted into actionable intelligence for network operations. Example Use Case: SD-WAN and WAN monitoring Figure 1: Active testing and monitoring</description>
      </item>
      <item>
         <title>PODCAST: SD-WAN Discussions with Jason Normandin, Broadcom Product Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-sd-wan-discussions-with-jason-normandin-broadcom-product-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-sd-wan-discussions-with-jason-normandin-broadcom-product-management</guid>
         <pubDate>January 30, 2019</pubDate>
         <description>Jason Normandin has over 17 years of experience in the network performance and fault monitoring industry. Focusing on user experience, APIs and new technologies, Jason strives to bring simplicity to complex technologies and insights into today's massive data repositories. Located in Massachusetts with his wife and two children, Jason works for CA Technologies at the Framingham, MA site but often travels to meet and work with customers and industry experts.</description>
      </item>
      <item>
         <title>CA Creates a Network Monitoring Picture Worth a Thousand Words</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-creates-a-network-monitoring-picture-worth-a-thousand-words</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-creates-a-network-monitoring-picture-worth-a-thousand-words</guid>
         <pubDate>March 31, 2019</pubDate>
         <description>CA's latest NetOps 19.1 release continues our portal unification and enables simple understanding of real-time device status based on alarm state from CA Spectrum via inventory, performance views, and context pages. Since the beginning of civilization, humankind has relied on illustrations and visuals to explain and better react to complex situations. The current era of connected, self-aware network devices with intelligent capabilities is no different, and the same age-old principle of visuals holds true. A visual context adds a layer of network monitoring simplicity on top of already complex network environments and, as some have rightly said, &quot;a picture is worth a thousand words&quot;. With the latest release of CA NetOps v19.1, we bring together a unified and logical view across fault, performance and flow for modern architectures like SDN, SD-WAN, Cisco Meraki and AWS, along with traditional networks, to simplify your job of network monitoring to better serve your customers and users. We have given life to the powerful idea of improving simplicity by using illustrations and visuals at a device level. Now our NetOps Portal shows real-time device status information based on alarm state from CA Spectrum. In addition to the real-time status, network operators continue to have the ability to correlate network alarms and open, in context, performance and flow views with minimal clicks. Network operators can now drill down into granular details of devices and interfaces as per a fault remediation process. Network operators can now easily navigate the NetOps Portal to view network monitoring alarms and see the corresponding status of devices. As a network operator reviews an alarm, they also have the ability to drill down into a context page for that alarm, where details of IP performance are surfaced. IP performance will showcase network performance and</description>
      </item>
      <item>
         <title>APM’s Evolution to AIOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/apm-s-evolution-to-aiops-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/apm-s-evolution-to-aiops-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>March 21, 2020</pubDate>
         <description>Application Performance Monitoring has been my career for 20 years. When our biggest customers have mission-critical production problems, I've been in the war rooms and on the midnight bridge calls, ensuring that teams are able to determine the root cause to fix the problem. I always questioned: why did I have to be on that call? Or get on a redeye to be at the client's doorstep the next morning? The answer is simple: application architectures are complex and only getting more complex over time. The skills required to triage and diagnose production issues are based on tribal knowledge and require experts to manually apply their experience to establish root cause. Traditional monitoring solutions require expert human intervention and do not have the smarts to proactively mine the data, automatically surface actionable insights or take self-healing actions. But has technology advanced to a stage where we can now teach systems to automatically triage and diagnose complex application problems? Can we put the expert in a box? Can this expert in a box learn and take remedial action without human intervention? It may sound like science fiction, but I thought the same of Star Trek in the eighties and today some of those farfetched ideas are a reality. With today's advancements in low-cost computing and data science we have the foundation to apply the smarts and put the 'expert in a box'. We are ready to make the shift from human-driven &quot;ITOps&quot; to machine-driven &quot;AIOps&quot;. Beam me up, Scotty! The Evolution of Application Architectures My career has spanned three major waves in the application performance monitoring space. As shown in Figure 1, the three waves line up nicely with how engineering disciplines have evolved over time from Waterfall to Agile and now DevOps methodologies. Figure 1: The Three Waves</description>
      </item>
      <item>
         <title>Monitoring Tools for Healthy Network Relationships</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-tools-for-healthy-network-relationships</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-tools-for-healthy-network-relationships</guid>
         <pubDate>March 14, 2018</pubDate>
         <description>Did you know? CA Spectrum can utilize multiple data sources to map topology. Getting alerts on network health is always crucial to uptime and the customer experience. But visualizing network deployments and interconnections via topology and relationship mapping in monitoring tools is just as important for NetOps to understand the health of operations. In addition to the Address Resolution Protocol (ARP), CA Spectrum utilizes data generated from neighbor discovery protocols like Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP) to accurately discover and map the physical relationships between devices as a fundamental capability of this network monitoring software solution. Given that most networks have some quantity of Cisco devices: Did you know? Cisco devices generally have CDP enabled and LLDP disabled out of the box. Best practice is to enable CDP and LLDP globally but disable them on Internet/untrusted interfaces. Check out the Cisco hardening guide for more information. Did you know? VMware supports both CDP and LLDP. While CDP is available to both a standard and a distributed switch, LLDP is only available in distributed switches. Check out the Utilizing CDP and LLDP with vSphere Networking article for more information. CA Spectrum, as a network discovery tool and much more, only leverages CDP information from Cisco devices and requires both Cisco devices to provide correlating connectivity information within this monitoring tool. In short, CA Spectrum does not leverage CDP information that may be exposed by other vendors like VMware, Riverbed and others for topology mapping. CDP and LLDP within monitoring tools CDP is a device discovery protocol that runs over the data-link layer (Layer 2) on all Cisco devices (routers, bridges, access servers and switches). CDP allows monitoring tools to automatically discover and learn about other Cisco devices that are connected to the network. To permit</description>
      </item>
      <item>
         <title>Summer Blockbusters and Demystifying the AI in AIOps - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/summer-blockbusters-and-demystifying-the-ai-in-aiops-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/summer-blockbusters-and-demystifying-the-ai-in-aiops-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 5, 2018</pubDate>
         <description>Learn more about AIOps during blockbuster movie season In a previous blog, I mused about whether the air conditioner in my car had AI and the parallels for AI in both self-driving cars and AIOps, or self-driving IT Operations. While the AI that enables self-driving cars will provide many benefits and is being heralded as game changing, when it comes to AI in blockbuster movies, there's one word to describe how it is commonly regarded: feared. Indeed, many films that reference AI depict dystopian worlds where something has gone a bit awry with the AI. Who can forget Skynet, which brought us various models of Terminators? To bring this back to the AI for self-driving cars, the movie version would likely give the car a personality similar to Stephen King's Christine! Movies often represent the greater fears in society, and fear of new things can be caused by a lack of understanding of them. Which brings me back again to AIOps. It's a relatively new term and you may be wary of it. Let's demystify it. The best way to do that is to understand it better. The good news is that the recent AIOps Virtual Summit is now available for on-demand replay. This summit provides a blockbuster level of practical content on AIOps. Topics include: The AI Advantage: How to Put the Artificial Intelligence Revolution to Work For IT Operations Achieving Autonomous Operations through AI and Machine Learning AIOps: Augmenting Humans with Artificial Intelligence Creating Autonomous Intelligent Infrastructures How AI is Helping Site Reliability Engineers Automate Incident Response Taming Cloud and Container Chaos with AI Ops The free event also includes demos of how CA products are enabling AIOps today. So grab some popcorn, maybe some candy and a giant beverage tub and sit back and</description>
      </item>
      <item>
         <title>How AI, machine learning and analytics are reshaping the future of IT Operations</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/-how-ai-machine-learning-and-analytics-are-reshaping-the-future-of-it-operations</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/-how-ai-machine-learning-and-analytics-are-reshaping-the-future-of-it-operations</guid>
         <pubDate>June 6, 2018</pubDate>
         <description>Insights from 100+ IT Operations professionals. In today's economy, where every business is in the software business, downtime is costly, and slow is the new down. Proactively managing and improving the experience of modern applications, cloud or traditional infrastructures and networks is a necessity, but it's not easy. AI and machine learning are influencing and shaping the future of IT operations. By using advanced algorithms and artificial intelligence, we can enhance various IT and business operations tools and improve the user experience. How can AIOps address key IT monitoring challenges that IT professionals face today? Recently, we conducted a survey of more than 100 IT professionals on AIOps (Artificial Intelligence for IT Operations), machine learning and analytics in conjunction with TechValidate. Through this survey, we have gained a clear picture of the challenges our customers face and what they hope to gain from leveraging these technologies moving forward. Let's recap some of our findings: Alert noise is our customers' #1 pain point Of those surveyed, over 70% identified alert correlation and proactive issue detection as the two biggest challenges they face. 90% of customers receive more than 10,000 alerts per month and 72% are currently using up to 9 IT monitoring tools. AIOps can help reduce the noise AIOps can help reduce the impact of these issues, with a reduction in downtime, IT monitoring tool sprawl, and time spent by IT professionals analyzing alerts. According to our survey results, predictive analytics is the most desired AIOps capability, with faster remediation being the number one perceived benefit of AIOps. Predictive analytics reduces alert noise = faster remediation The future of AIOps is bright, one that can eliminate several hurdles teams are currently facing. AIOps can help drive faster root cause analysis, identify and predict capacity bottlenecks and easily</description>
      </item>
      <item>
         <title>How to Spot and Resolve API Performance Issues</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-spot-and-resolve-api-performance-issues</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-spot-and-resolve-api-performance-issues</guid>
         <pubDate>May 11, 2018</pubDate>
         <description>APIs create the connections that drive today's digital business. They are the communication pathways that allow for the flow of application components and data, enabling companies to quickly build new apps and features and integrate with third-party services to launch new offerings. Outside of traditional web apps or mobile apps, APIs are also the driving force behind the Internet of Things, where they will facilitate the connection to over 75 million connected devices by 2025. Because APIs are such a critical part of modern applications, they can also pose a big risk. If an API isn't functioning properly, it can have a substantial impact on the performance of your application, the user experience and, ultimately, the bottom line. API issues can affect a company of any size, from the technology giants, like Facebook, to the app developer building services around census data. Developing an API monitoring practice sooner rather than later will be beneficial, as the number of devices, apps and services connected through APIs is only set to grow. Speeding Resolution of API Issues Ensuring that APIs are performing as expected and having the ability to quickly resolve any issues that occur starts with synthetic monitoring. Synthetic API monitoring allows you to continuously monitor all your internal APIs and the third-party APIs your app depends on. Synthetic API monitoring solutions, like Runscope, provide a 24/7 global view of API performance, uptime and data integrity, enabling you to spot API issues in real time. Once a problem is detected, you need the ability to understand what caused the issue. If your APIs are returning the wrong data, your synthetic tool will pick this up and developers can fix the issue, but if it's a performance problem, you need the ability to tie your synthetic checks to backend systems. An Application Performance</description>
      </item>
      <item>
         <title>Integrating Service Desk Management Tools With IT Infrastructure Monitoring Solution</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/integrating-service-desk-management-tools-with-it-infrastructure-monitoring-solution</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/integrating-service-desk-management-tools-with-it-infrastructure-monitoring-solution</guid>
         <pubDate>March 29, 2018</pubDate>
         <description>Over the course of the last few years, we have provided or created integrations for our award-winning IT monitoring solution, CA Unified Infrastructure Management (CA UIM), with several different Service Desk Management tools. Making use of this experience, we have identified a set of use cases that fit most requirements for these integrations. A typical integration will have the following steps: Create a ticket from a given alarm Update or close the ticket when the alarm is cleared Close (acknowledge) the alarm when the ticket is closed Optionally, changes to the alarm status (e.g. changing severity) can be reflected in the ticket, for example with a note. A mapping of alarm IDs to ticket IDs needs to be maintained. While an in-memory mapping is sufficient for most of the requirements, it will not be persistent across probe restarts. Therefore, we maintain the mapping in a simple database. Ticket Creation Usually, the first step when discussing an integration is the creation of a ticket (or incident) in the service management system. There is a balance between the risk of a ticket storm if automatic ticket creation is enabled, and of delay and extra manual effort if user interaction is used to trigger the process. To balance these requirements, the preferred trigger point is the assignment of an alarm to a designated user. While this will not fit every workflow, it provides a good balance between the competing requirements. Initially, the assignment of the alarm will be a manual exercise, but as confidence grows, Auto-Operator rules can be put in place to assign alarms with defined attributes to the correct user. We need to maintain a mapping of alarms to tickets for further processing. To improve usability, information about the ticket can be saved in the Alarm – either as</description>
      </item>
      <item>
         <title>Modern Network Performance Monitoring for the Modern Age</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/modern-network-performance-monitoring-for-the-modern-age</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/modern-network-performance-monitoring-for-the-modern-age</guid>
         <pubDate>August 23, 2018</pubDate>
         <description>CA Performance Management, which is part of the CA Network Operations and Analytics platform, is an award-winning network performance monitoring tool built for modern networking environments. I've been working with customers for over a year to help them adopt CA Performance Management. I've seen the most success when customers not only replace their tool, but also update their process, thereby getting the most value out of what they have. In this time, I've heard various gratifying sentiments that exemplify the value that CA Performance Management brings customers. &quot;This will save me a week and a half of work every month.&quot; Several months ago, I was on site with an MSP helping them with enablement training and best practices for CA Performance Management. It was a small team within a much larger company, with a small group of engineers supporting a dozen customers as a dedicated ISP. To show the performance of the network, they sent CA eHealth reports to their customers. One of the engineers had been spending a significant amount of time each month compiling the information from the various reports generated in CA eHealth. When I showed him how the network performance monitoring dashboards in CA Performance Management could be used as a template to generate PDF reports, he realized that he'd now be able to easily script all the work he'd been doing to build out his reports. &quot;CA Performance Management is the only tool that works at our scale and has a mature set of APIs for automation and data extraction.&quot; One large enterprise I worked with was very open with their vendor evaluation process. They were assessing several competitive offerings and conducting their own trials internally. They eventually settled on moving to CA Performance Management. When I asked about their</description>
      </item>
      <item>
         <title>Software Defined Networking Shouldn't Be As Scary As it Seems - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/software-defined-networking-shouldn-t-be-as-scary-as-it-seems-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/software-defined-networking-shouldn-t-be-as-scary-as-it-seems-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>June 1, 2018</pubDate>
         <description>With the popularity of software defined networking (SDN), today’s data centers are increasingly becoming home to hidden entities: the LPAR, the virtual firewall, the software-defined router. We know they are there; we just can’t see them. The shift from visible devices to invisible pools of resources requires organizations to rethink how they manage the entire data center – from configuration processes and staff skillsets to monitoring tools and performance metrics. In this new world of dynamic data centers, just one mistaken click of the mouse can result in 100 firewalls being deployed instead of the intended 10. It’s a scary prospect – not just for network engineers and operators, but also business stakeholders. An error behind the closed doors of the data center can quickly cascade all the way to the customer. Forget the scary stories of software defined networking Despite some fear around implementing next-generation technologies, the industry is seeing an uptick in adoption rates for software-defined networking and network functions virtualization (NFV). Analyst firm ACG Research reported that service providers’ adoption of SDN in their data centers increased 75% recently, and projects that deployment of NFV will increase at a compound rate of 44% per year between now and 2020. According to Paul Parker-Johnson, Principal Analyst at ACG Research, “Sometimes the prospect of implementing a virtualized service delivery platform can be daunting.” He adds that “The tasks are numerous, the mix of skills is diverse, and the answers on how to integrate solutions into deployments and operations require hard work and collaboration to resolve.” One of the most important tasks that needs to be addressed is day-to-day assurance of the new network. Paul Parker-Johnson stresses that: &quot;SDN and NFV will reshape conventional network designs and introduce the need for new management and</description>
      </item>
      <item>
         <title>CA Technologies Partners With StreamWeaver for Third-Party Data Integration</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-technologies-partners-with-streamweaver-for-third-party-data-integration</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-technologies-partners-with-streamweaver-for-third-party-data-integration</guid>
         <pubDate>November 1, 2018</pubDate>
         <description>Third-party data integration hasn't always been easy. In the past, it could take months, requiring large financial investments and forcing developers into low-value integration coding. With the advent of open source technologies, APIs, and analytics, there's a new approach to help solve this issue. CA Technologies is extremely excited to announce a partnership with StreamWeaver, an integration automation company that gives IT organizations the ability to connect their ITOM tools and systems quickly and painlessly. CA will be using StreamWeaver to bring third-party operations data into CA Operational Intelligence, a core component of CA's AIOps platform. CA Operational Intelligence is an advanced analytics solution that provides users with comprehensive insights by ingesting data across performance monitoring tools and third-party sources. This data is put into a single, extensive data lake, eliminating the need for IT Ops teams to create and maintain a central data repository. Machine learning-driven analytics then give users insights on metric, event, topology, text, and log data. Users can apply this information to deliver a phenomenal experience to their customers, improve service quality and drive operational efficiencies. At no additional cost, customers will be able to integrate StreamWeaver with CA Operational Intelligence, eliminating the difficulty generally involved in accessing normalized operations data across a company's entire IT domain. By making it easier to integrate with third-party data sources, users will not only maximize the value of their CA Operational Intelligence investment, but they will also increase the efficiency with which CA's ML-powered algorithms proactively resolve incidents. As CA General Manager of Agile Operations, Ali Siddiqui, explains, &quot;The addition of StreamWeaver augments the value of CA Operational Intelligence by ensuring third-party operations data can be consumed by its core machine-learning algorithms. With this integration we will be able to</description>
      </item>
      <item>
         <title>How to Choose the Right SD-WAN Path for Your Critical Applications</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-choose-the-right-sd-wan-path-for-your-critical-applications</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-choose-the-right-sd-wan-path-for-your-critical-applications</guid>
         <pubDate>October 22, 2018</pubDate>
         <description>Guaranteeing a reliable application experience for your customers means adopting modern SD-WAN monitoring tools. Think about it - we are constantly evaluating how to be more efficient in our lives. Especially when we need to get somewhere. Apps nowadays can let us know when it's time to leave for our destination, how long it will take, the most effective route, as well as obstacles in our way that could delay our arrival. We rely on these tools to make our busy lives more manageable. Today's enterprises are implementing new tools to help them deliver business-critical applications along the most efficient path as well. SD-WAN (software-defined wide area network) technologies promise improved application performance while lowering costs, but measuring how your application is delivered along these new routes is key to successful deployments and happy end users. What is an SD-WAN application path? Basically, it provides simple views into how your WAN infrastructure is delivering the application experience to your customers compared to your set service level agreements (SLAs). How does this relate to the apps we use today to get us where we need to go in the most efficient way? Let's look at a common use case Suppose you are a food distributor shipping meat, fish and wine out to local grocery stores. In order to be successful and have happy customers, you need to evaluate the various routes you could potentially take, along with the costs and time it takes to travel those routes to reach your destination for each product. Figure 1: Shipping over residential roads, the highway and toll roads all present different costs and hazards to evaluate in order to find the most efficient route to deliver your products to your customers. Taking residential streets could help you avoid</description>
      </item>
      <item>
         <title>CA APM Now Provides Proactive Monitoring for Red Hat OpenShift</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-now-provides-proactive-monitoring-for-red-hat-openshift</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-now-provides-proactive-monitoring-for-red-hat-openshift</guid>
         <pubDate>June 4, 2018</pubDate>
         <description>As cloud platforms such as OpenShift gain traction in the enterprise, traditional monitoring approaches that once worked well for monolithic application architectures have become ineffective due to the increased complexity and change. What's now needed are modern approaches that can keep track of application performance across these dynamic environments without overburdening teams with complex maps and manual thresholding that just generate noise and false alarms. At CA, we have built our Application Performance Management (APM) solution to help provide immediate performance insight into applications and microservices deployed and managed by OpenShift, and are proud to be a Red Hat Certified Technology Partner. &quot;Powered by AIOps and machine learning, CA's solution complements the Red Hat OpenShift platform by bringing the necessary business insights into application performance and infrastructure health to help customers build better apps faster and achieve successful digital transformation.&quot; – Sushil Kumar, SVP of Product Management, CA Technologies OpenShift Monitoring with CA APM - Put an End to Alert Fatigue CA APM for OpenShift can help you ensure app performance and deliver a positive customer experience by providing complete visibility into each layer of your environment, from app to infrastructure. With agentless configuration options and dynamic monitoring, CA APM automatically detects and maps new OpenShift clusters and projects – so you no longer have to deal with time-consuming configurations or instrumentations. CA APM App-to-Infra Layers With experience and role-based management options (Universes), you can easily simplify complex topology maps into role-based views, allowing teams to quickly see the data relevant to them. CA APM Universes CA APM also uses advanced analytics and machine learning to predict problems and anomalies in your environment while reducing alert false-positives, so your teams can quickly remediate issues before the customer experience is impacted. CA APM Experience View and Analysis Notebook Most importantly, CA APM</description>
      </item>
      <item>
         <title>Reduce Complexity By Enabling Alarm Based Policy Configurations in CA UIM 9.0.2</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/reduce-complexity-by-enabling-alarm-based-policy-configurations-in-ca-uim-9-0-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/reduce-complexity-by-enabling-alarm-based-policy-configurations-in-ca-uim-9-0-2</guid>
         <pubDate>January 4, 2019</pubDate>
         <description>CA Unified Infrastructure Management (CA UIM), a leading infrastructure monitoring solution, provides out-of-the-box monitoring for over 200 technologies through monitoring probes. Traditionally, CA UIM administrators need to know which probe is monitoring which device, and then set thresholds at each probe level. With CA UIM 9.0.2, you can now speed up your monitoring deployments and reduce complexity by enabling alarm-based policy configurations against multiple metrics, regardless of the technology being monitored. This allows you to unify and centralize configurations of thresholds, messages and actions. An alarm policy provides a unified interface where you can select a device, group or monitoring technology. Once you have made this selection, you can define specific thresholds, which could be an immediate (static or dynamic) threshold or a time over threshold. This feature allows you to: View a list of alarm policies Create or edit alarm policies Build conditions that trigger an alarm Build conditions to monitor a particular device, a group of devices, or a specific monitoring technology (such as Docker) Configure the time over threshold alarm to reduce alarm noise to an actionable level Customize alarm messages to provide the information you need Creating An Alarm Policy Creating an alarm policy is a two-step process. First, you create an enhanced profile in the Unified Management Portal (UMP), which enables metric collection. Once metric collection starts, you can create an alarm policy in the Operator Console. Step 1: In UMP, create an enhanced monitoring profile with metric collection enabled. Step 2: Create an alarm policy in the Operator Console. In the alarm policy condition wizard, you can select: Device: Allows you to monitor the state or performance metrics of a device. To configure an alarm condition for a device, select a device name, the metric, and the component that</description>
      </item>
      <item>
         <title>Visibility into MetroE Health from Your Network Monitoring Application</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/visibility-into-metroe-health-from-you-network-monitoring-application</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/visibility-into-metroe-health-from-you-network-monitoring-application</guid>
         <pubDate>April 30, 2018</pubDate>
         <description>Did you know? You can monitor MetroE links with CA Spectrum's network monitoring application. Metro Ethernet (&quot;MetroE&quot;) is a networking technology often used to connect customer networks. Because it is based on the Ethernet standard, MetroE offers lower cost and simpler implementation than other networking technologies of equivalent bandwidth. As a network engineer with experience across several network monitoring applications, I've found that many of my colleagues and customers in network management are less familiar with MetroE than other technologies. Cisco has done a fabulous job of providing information, which I have leveraged below (and referenced at the end). I also include a brief section on how CA Spectrum can provide visibility into this network service option. Figure 1: A simplified view of a point-to-point MetroE service The two customer sites above are connected by means of a Metro Ethernet Network (MEN) maintained by a service provider. A CPE (customer premises equipment) device in each customer site is defined as the customer-side interface to the MEN. A UNI (user network interface) is defined as the dividing line between each customer site and the MEN. Each UNI is dedicated to a single customer site. An interface on a PE (provider edge) device is designated as the UNI. Within the MEN, an EVC (Ethernet virtual circuit) provides connectivity between the UNIs. The EVC ensures that the UNIs communicate only with each other and with no other devices in the MEN. MetroE Management Options for Network Monitoring Applications Ethernet Local Management Interface (E-LMI): (Customer Focused) Similar to its counterpart in Frame Relay, this protocol was developed by the Metro Ethernet Forum. It operates on the link between the CE device and the PE device. E-LMI automates provisioning of the CE device. Ongoing fault notification (as detected by 802.1ag) to the CE device</description>
      </item>
      <item>
         <title>CA Integrates NetFlow Data to Expand Its NetOps Portal Views</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-integrates-netflow-data-to-expand-its-netops-portal-views</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-integrates-netflow-data-to-expand-its-netops-portal-views</guid>
         <pubDate>April 13, 2019</pubDate>
         <description>CA's latest network monitoring release streamlines access to key NetFlow data, including protocol, host and type of service (ToS), in context, and expands your operational awareness in the NetOps Portal. In the recent CA NetOps 19.1 release, we continue to unify our comprehensive network monitoring into one NetOps Portal (aka CA Performance Center) and are now proud to add network flow data from CA Network Flow Analysis to the already unified fault and performance monitoring dashboards from CA Spectrum and CA Performance Management, respectively. In the NetOps Portal, our &quot;headless NFA&quot; provides detailed flow views and reports and ensures users don't have to toggle across additional windows or tool sets. The workflow from alarms to performance to flow is now seamless, eliminating the learning curves that existed due to different management consoles. Why is flow important to understanding network fault? The interface may be saturated, but why? You know the network is overburdened, but where is the spike coming from? CA's flow analysis breaks down interface utilization to reveal the fingerprint of that suspect traffic. With network monitoring in NetOps 19.1, you can get that information in context with group, site, and time metrics as well. To illustrate this, let's start with the new Network Interface Performance dashboard. This dashboard provides at-a-glance visibility into high-level flow data based on top flow volume, and our customers can use in-context drill-down options to view details for any IP interface. This high-level view is similar to what was provided in the Enterprise Overview in past versions of CA Performance Center, but instead of directing users to the legacy CA Network Flow Analysis console, selecting a host or protocol in the table now narrows the data in the adjoining table to the relevant interfaces without redirecting users to a</description>
      </item>
      <item>
         <title>The Latest Release of CA Unified Infrastructure Management Brings Easier Deployment, More Coverage, and Auto-Remediation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-release-of-ca-unified-infrastructure-management-brings-easier-deployment-more-coverage-and-auto-remediation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-release-of-ca-unified-infrastructure-management-brings-easier-deployment-more-coverage-and-auto-remediation</guid>
         <pubDate>November 14, 2018</pubDate>
         <description>CA Technologies is excited to announce the release of CA Unified Infrastructure Management 9.0.2. CA UIM is the only solution that provides AI-driven analytics, comprehensive coverage of more than 200 technologies, and an open, extensible architecture. Check out the six biggest features that come with this latest release. Stronger Security &amp; Compliance TLS, which stands for &quot;Transport Layer Security&quot;, is a protocol that allows digital services to communicate over the internet securely. UIM 9.0.2 provides hardened security compliance with TLS 1.2 to ensure encryption of the communication between the database layer and 30+ infrastructure probes. This support lets the UIM Server establish secure communication with the UIM database without compromising product performance. UIM 9.0.2 also begins to support Microsoft SQL Server 2017 and Oracle Database 12c. Modern, Intuitive User Interface CA Unified Infrastructure Management 9.0.2 brings an enhanced user interface that gives more visibility into all technologies being monitored. The Operator Console has been built using HTML5, creating a UI with smarter user experience workflows, richer out-of-the-box dashboards, and ad-hoc reporting capabilities. Policy Based Alarm Configuration You can now speed up your monitoring deployments and reduce complexity by enabling policy alarms against multiple metrics, regardless of the technology being monitored. This allows you to unify and centralize configurations of thresholds, messages, and actions. Generic REST API based Monitoring CA Unified Infrastructure Management can monitor a wide variety of cloud tools, such as Amazon Web Services, Azure, and Nutanix. One cloud tool that isn't currently on this list is the Google Cloud Platform. CA Unified Infrastructure Management 9.0.2 rectifies this by allowing users to leverage RESTMon probes to monitor any service, system, or application that you might not see on our integrations page. Web-Hooks/Auto-Remediation With CA UIM 9.0.2, you can</description>
      </item>
      <item>
         <title>KPN shares how AIOps is helping break down DevOps silos</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/kpn-shares-how-aiops-is-helping-breakdown-devops-silos</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/kpn-shares-how-aiops-is-helping-breakdown-devops-silos</guid>
         <pubDate>June 8, 2018</pubDate>
         <description>How AIOps is impacting the future of IT Ops at a leading Dutch telecommunications company. We recently caught up with Arnold Hoogerwerf, Chief Product Owner Software Tooling at KPN, to learn more about how AIOps is impacting the future of IT Ops at this leading Dutch telecommunications company. In the Q&amp;A below, Arnold shared with us how AIOps is helping break down DevOps silos and providing insight into the business process as a whole. KPN deals with such large volumes of digital data that humans are hardly capable of analyzing it anymore without the help of technology. This is where AI and machine learning come into play, by helping analyze huge amounts of current and historical data, not only from the affected environment but also from related environments. AIOps helps drill down to the probable cause of a problem much faster than most humans. In a perfect world it could even warn of an event that would normally turn into a problem. Why aren't traditional approaches to IT Ops monitoring working anymore? We are moving into the era of DevOps, in which 'Dev' means 'Fail Fast' and 'Ops' means 'Fail Never'. For this reason, we have a tremendous need for data, since we are afraid we are going to fail if we don't know all the facts. In fact, we generate so much data nowadays that we humans are hardly capable of analyzing it anymore without the help of technology. Yet, we need to be careful of this relentless focus on data and statistics. Everything might become a statistic. In fact, that's already the case. You get search results, news, offers and so on, based on what everyone else with the same characteristics searches for, reads or buys. In IT Ops this could be dangerous. Is a computer very busy because</description>
      </item>
      <item>
         <title>The Importance of Monitoring the User Journey</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-importance-of-monitoring-the-user-journey</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-importance-of-monitoring-the-user-journey</guid>
         <pubDate>March 10, 2018</pubDate>
         <description>Application performance monitoring practices are evolving quickly. More data is gathered for analysis now than ever before, and as new monitoring trends appear, integrating, normalizing and analyzing the increasingly complex data in context becomes more important than ever. Not so many years ago, it used to be good enough that the app was available. This meant getting simple performance metrics, uptime trends and alerts when things went wrong. Considering the number of metrics coming in from the millions of users accessing portals every day, this was no simple task, but with good practices and the right solutions, problems were fixed swiftly to ensure uptime and functionality. Traditionally, this approach required specialized teams and tools. For example, a dedicated team would look after the storage cluster and the backend app servers, another team supported customers when issues occurred, and a third team investigated usage metrics to optimize app design and performance. To achieve good monitoring practices, a host of monitoring, reporting, alerting, service desk and other solutions are leveraged to make sure that the flow of discovering and fixing issues is seamless. Today, this approach is no longer enough to stay competitive. Application performance monitoring needs a new model. From performance metrics to customer experience metrics App monitoring metrics are also evolving. The current, most common performance metrics collected from apps – such as error rates, average latency and availability – are mostly consumed by a limited group of people: the developers, testers, support personnel and app designers. These metrics are still important, but to really step into the shoes of a customer, more complex indicators are needed. Let’s look at one APM metric – URL performance. To know that a URL delivers a web page quickly and reliably was, and still is, essential. URL functionality can be secured by looking</description>
      </item>
      <item>
         <title>Beat the summer heat with AIOps!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/beat-the-summer-heat-with-aiops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/beat-the-summer-heat-with-aiops</guid>
         <pubDate>June 28, 2018</pubDate>
         <description>Find out how to stay cool with the AIOps Virtual Summit The summer heat is upon us in the northern hemisphere. It's hot out there! I'm grateful for the artificial intelligence (AI) in my car that controls the air conditioner and keeps things nice and cool, but not too cold. No, my car does not have a HAL 9000 onboard, but it is able to sense the indoor and outdoor temperatures as well as the level of sunlight to automatically adjust the coolness of the air and level of air flow according to my preset temperature preference. Sounds fancy, but wait, this isn't quite AI, is it? It seems like a smart feature, to be sure, but doesn't rise to the level of AI. But if a car can drive itself… AI is certainly a valid label to attribute to the capabilities that make this happen. Similarly, in the world of IT Operations Management solutions, AIOps is increasingly used to describe new intelligent capabilities that aid IT Operations. And, just as a simple automatic feature in an automobile might not be considered to be on the level of the artificial intelligence needed for a self-driving car, there is a span of capabilities that culminate in full AIOps. During the recent AIOps Virtual Summit, now available for on-demand replay, Ashok Reddy, Group General Manager, DevOps at CA Technologies, outlined the capabilities/levels of self-driving cars relative to self-driving IT Ops, from manual to fully autonomous. The journey to fully autonomous operations/AIOps begins at level zero with manual reports and analysis. We are all familiar with this (unfortunately!). This fully manual level relates to a car without automation and manual steering, but perhaps with an air conditioner that regulates itself. Next is anomaly detection and algorithmic noise reduction, where IT</description>
      </item>
      <item>
         <title>Is Your Website Meeting Customer Expectations? - Start Monitoring Today</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/is-your-website-meeting-customer-expectations-start-monitoring-today</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/is-your-website-meeting-customer-expectations-start-monitoring-today</guid>
         <pubDate>July 24, 2018</pubDate>
         <description>How to achieve an effective website performance monitoring strategy It’s no surprise that a poorly performing website can have massive impacts on your brand and customer loyalty, but to me what’s truly shocking is the number of organizations that lack the visibility needed to truly meet customer expectations. According to a recent study conducted by Vanson Bourne, 90% of businesses feel they lack insight into their customer experience and 93% feel they could improve the way they measure customer experience today. To keep customers happy and deliver a positive experience, monitoring website performance is critical. But like most things, this is easier said than done. While the web itself is a fairly well-established platform, today’s sites are evolving to incorporate more modern technologies and are becoming increasingly dynamic and complex – leading to many potential causes of performance degradation. Not all websites are alike, which means not all issues are alike either. Performance problems come in all shapes and sizes and could be the result of JavaScript issues, too many HTTP requests, inefficient code, image formatting, third-party plugin issues – the list goes on. The one commonality is that these issues all cause the customer experience to suffer and likely could all have been prevented. To properly stay in tune with these potential issues and outages, you not only need to have the right tools in place, but you also need to ensure you’re monitoring the right set of KPIs. Some of the most common include: Uptime – the time your site is available. When measuring uptime, it’s essential that you measure across your entire site, not just one page. Page load time – the time it takes for the entire page to load completely. This is important because a user will likely only tolerate a delay of</description>
      </item>
      <item>
         <title>Getting Value Out of the IT Operations Big Data Gold Mine</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/getting-value-out-of-the-it-operations-big-data-gold-mine</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/getting-value-out-of-the-it-operations-big-data-gold-mine</guid>
         <pubDate>August 28, 2018</pubDate>
         <description>Detect and isolate operational issues, then prescribe and predict with the help of CA Digital Operational Intelligence &quot;How much data does your business generate?&quot; - &quot;Way too much, several Terabytes per day.&quot; &quot;What do you do with that data?&quot; - &quot;Some we discard, some we store in case it's useful.&quot; This is a common conversation I have with customers who have embraced the application economy. Microservices, containers, cloud services, SDN and social media (the list goes on) are just some examples of these non-stop data sources of metrics, alarms and logs. Yes, the amount of data is overwhelming, but it is also a real gold mine for Data Scientists and Artificial Intelligence. Some of the big players in the market have already realized its value and how to monetize it. All organizations know that this Big Data hides evidence that could prevent most of their operational issues; they just need to smell the smoke before seeing the fire. How to Get Value Out of the Chaos The silo approach has been a major issue in most organizations. I have heard this topic since the beginning of my career: &quot;Data correlation is hard to achieve when data is spread across isolated repositories.&quot; Hence it is important to leverage a Data Lake repository where data is stored in its natural format. The Data Lake will be the gold mine where all the raw data lives, ready to be processed by AI algorithms. Data Lakes are the bread and butter of Data Scientists and Data Developers. Some of the key characteristics of Data Lakes are: Easy to import/export data: Real-time ingestion/extraction APIs should be available. Secure: Data must be secure at rest and in transit (to be compliant with GDPR). Accessibility: Machine Learning and analytics need easy access to raw data to produce insights. A</description>
      </item>
      <item>
         <title>Why Point, Reactive Monitoring &amp; Automation Don't Work for Today's Digital Businesses</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-point-reactive-monitoring-automation-don-t-work-for-today-s-digital-businesses</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-point-reactive-monitoring-automation-don-t-work-for-today-s-digital-businesses</guid>
         <pubDate>October 17, 2018</pubDate>
         <description>Across industries and markets, personal interactions continue to be supplanted by the digital. Now, applications are where battles for customer loyalty can be won or lost. In the digital economy, it’s application quality that separates market victors from laggards. For today’s businesses, there’s a premium on delivering optimized user experiences—all the time and every time. While optimizing service levels and experience is critical, it seems to be getting more challenging to do every day. Increasing Complexity. Most enterprise-class business services now rely not only on traditional systems, including on-premises mainframes and distributed systems, but on a plethora of new, dynamic technologies, such as containers, cloud delivery models, virtual and software-defined components and more. Increasing Scale. The volume, variety and velocity of data that needs to be managed, correlated and analyzed continues to grow dramatically. In the wake of initiatives like multi-cloud deployments, microservices development and Internet of Things (IoT) implementations, teams continue to see explosive growth in the operational data being generated. Ultimately, internal team members simply can’t keep pace. Reactive, disjointed tools fuel more complexity Exacerbating matters is that, as IT teams looked to manage their increasingly diverse environments, they’ve had to add more point monitoring tools and automation capabilities to the mix. These disjointed tool sets compound the complexity and challenges: Point monitoring tools result in reactive issue identification and alert fatigue. Working with dozens of tools, teams struggle with hundreds of thousands of alerts that feature a high rate of inaccuracy and redundancy. Lacking unified visibility that spans their hybrid environments, staff spend too much time inspecting various systems and domains in order to identify the root cause of issues. 
As a result, customer experience suffers while triage calls run for hours. Point automation capabilities don’t scale or work in complex environments. When organizations employ limited automation</description>
      </item>
      <item>
         <title>How AIOps is helping William Hill analyze large volumes of digital data</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-is-helping-william-hill-analyze-large-volumes-of-digital-data</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-is-helping-william-hill-analyze-large-volumes-of-digital-data</guid>
         <pubDate>June 10, 2018</pubDate>
         <description>How AIOps is impacting the future of IT Ops at one of the world's leading betting and gaming companies. We recently caught up with Andrew Longmuir, who heads up the Capacity and Monitoring Engineering team at William Hill, to learn more about how AIOps is impacting the future of IT Ops at one of the world’s leading betting and gaming companies. In the Q&amp;A below, Andrew shared with us how AIOps is helping William Hill analyze large volumes of digital data—making it easier to solve hard problems. With a new services model, it is imperative to have comprehensive visibility of IT operational data across the entire digital delivery chain to speed service delivery, increase IT efficiency and deliver a superior user experience. But there’s also the human factor. There is a real skills gap, and we need to rely on automation and machines, even down to driving self-writing code, since there will not be enough developers around. Why aren’t traditional approaches to IT Ops monitoring working anymore? By moving to modern technologies and cloud we create a proliferation of objects, services and metrics - a metric explosion, if you like. Traditional approaches to monitoring will not work. You can’t have a static threshold or policy defined for every scenario, managed by humans; it’s impossible. As humans, we are all subject to cognitive limitations; algorithms, on the other hand, are capable of processing millions of events and deriving meaning from large datasets. William Hill have been on this journey for 18 months and it’s a challenging problem. Services are the new focal point; it’s a different paradigm. I think a good example is that traditionally we would light up dashboards like a Christmas tree if something went down, yet this is perfectly normal in a containerized environment due to the ephemeral nature of containers. It’s</description>
      </item>
      <item>
         <title>Automate Network Monitoring Through Powerful APIs</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/automate-network-monitoring-through-powerful-apis</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/automate-network-monitoring-through-powerful-apis</guid>
         <pubDate>June 24, 2018</pubDate>
         <description>NetOps needs to automate simple but manual network monitoring processes to ensure success in today's digital world. You recently got started with CA Performance Management, the industry-leading unified and scalable network monitoring tool for traditional, SDN and cloud networks. But you have thousands of devices to be monitored, and they all need to be added to the system. CA Performance Management has out-of-the-box functionality to load a list of IP addresses or address ranges for discovery. But this is just the initial aspect, and you're looking for enhanced capabilities. As in most enterprises, your environment is not static: devices are added and removed, some are rebuilt and temporarily in maintenance. Network monitoring should of course handle all these cases appropriately - and automatically for some of them, like detecting changes on a device. For other cases, like handling devices that are in maintenance or that no longer exist, someone has to notify the system. This type of information resides in a configuration database. But is it possible to forward changes in the configuration system to the network monitoring solution programmatically? Yes, thanks to the CA Performance Management REST web services API! It provides powerful functionality such as: lookups for tenants, IP domains and existing profiles; creation of a profile and addition or removal of addresses to/from a profile; starting a discovery; removing devices; changing the lifecycle state of a device. CA's engineering services, in cooperation with the PreSales teams, created the manageDevices script. It provides a single interface to these functions. Script input is a CSV file with the IP addresses that need to be processed. That file gets created from the configuration database. Using this automated approach simplifies the management of network monitoring and ensures its consistency with your configuration database. The script along with</description>
      </item>
      <item>
         <title>Assurance Across Traditional and Software Defined Networking Stacks</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/assurance-across-traditional-and-software-defined-networking-stacks</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/assurance-across-traditional-and-software-defined-networking-stacks</guid>
         <pubDate>April 22, 2018</pubDate>
         <description>Finding harmony in today's digital business age depends on advanced network monitoring strategies across old networks and new software defined networking architectures. The network is the most important delivery mechanism for your brand. It's how you deliver your brand, and it's core to your digital transformation strategy for consistently delivering an amazing customer experience. Because the network is key to delivering the customer experience, it deserves a full-stack network monitoring strategy - for the traditional networks of today and the software defined networking (SDN) of tomorrow. In the service provider world, it is not unheard of to have 3-minute SLAs for virtual network service delivery - from the customer request, to provisioning, to billing for that service. Evolving network operations (NetOps) monitoring enables today's service providers to be innovative and competitive with fast time to delivery. Figure 1: One network monitoring experience for traditional and software-defined networking stacks is critical for NetOps success. Now that software defined networking is shifting from service providers to the enterprise, we see software-defined data center (SDDC) and SD-WAN use cases emerging, where orchestration systems will make dynamic changes to the network and resources based on application demand. Our days of &quot;set it and forget it&quot; network administration are over, a constant reminder that it is not about network component monitoring anymore; it is about network service monitoring. That's why it is so important that we do traditional and new SDN network monitoring across the entire stack, in a single view. 
Not only because we have to get away from swivel-chair monitoring with multiple admin tools, but to view and correlate all network activity spanning numerous technologies - across the underlay and overlay layers, in one context and in one dashboard - scaling up and across the old and new networks.</description>
      </item>
      <item>
         <title>AIOps: Helping SREs Predict the Future?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-helping-sres-predict-the-future</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-helping-sres-predict-the-future</guid>
         <pubDate>June 12, 2018</pubDate>
         <description>As a kid I grew up reading a lot of science fiction. My forbearing parents used to let me take out from the library the max number of books each week they would allow (30, I still remember that number). And each week I would go back for more. Given this constant consumption of augury you would think something I read would have prepared me for the future we now face within the Operations space. While there are definitely some inklings in the science fiction canon about computer systems constructed at such scale that they would be hard for humans to understand, there is precious little attention paid to what it would take to operate them in production. Welcome to my world (and your reality, too, I bet). At the upcoming AIOps Virtual Summit on June 20, we're going to be discussing two separate approaches to handling this level of complexity and how they intersect. The first is the engineering discipline known as Site Reliability Engineering (SRE) which aims to engineer failure out of the system. The second, AIOps, is a newly coined term for the application of a class of advanced algorithms to the massive corpus of operational data we are now accumulating just as part of the ordinary day-to-day activity of running all of these systems and services. One goal of the former is to construct a set of operational practices that allow us to navigate the tricky path between a desired feature velocity (iterating the software as fast as possible to provide the features a business needs to provide to its customer base) and a desired level of operational stability (keeping the system available for those customers). This is trickier than it sounds for at least three reasons: There are often completely different sets of people working</description>
      </item>
      <item>
         <title>Dude, Where's My Self-Driving App? - Level 1: AIOps Anomaly Detection and Algorithmic Noise Reduction</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/dude-where-s-my-self-driving-app-level-1-aiops-anomaly-detection-and-algorithmic-noise-reduction</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/dude-where-s-my-self-driving-app-level-1-aiops-anomaly-detection-and-algorithmic-noise-reduction</guid>
         <pubDate>October 8, 2018</pubDate>
         <description>AIOps Anomaly Detection and Algorithmic Noise Reduction I’m a sucker for sci-fi books and movies. Especially those that present far-fetched concepts, only for them to quickly become reality. Like, for example, Johnny Cab from the 1990 flick – Total Recall. In the movie (loosely based on a short story by master writer Philip K. Dick), the hero hops into a driverless car called a Johnny Cab. It’s fully autonomous, complete with a mannequin-like figure called Johnny that interacts with passengers in an annoying but all too familiar way. Not surprisingly, Johnny ends up smashed to pieces. At the time driverless cars seemed fanciful, but now they are coming soon to an automotive dealer near you. It’s not a question of if but when, and the impact will be incredible. Fully autonomous, they’ll be optimized for efficiency, dropping passengers off at their destination and then returning home. They’ll be safer too. Today, drivers rely on one set of eyes to drive safely (two if, like me, you have a back-seat partner driver), but a driverless car will process hundreds, even thousands of inputs simultaneously from a vast array of sensors. So, IT operations dudes, if driverless cars are within reach, where's my driverless app? That was a question I raised in a recent blog and which was further dissected in an Artificial Intelligence for IT Operations (AIOps) virtual summit keynote presentation. If we're reaching a point where car steering wheels are the new coffee-cup holders, then surely IT monitoring can advance to a nirvana state. A state where AI and machine learning, also known as AIOps, transform reactive and backward-looking monitoring into a fully autonomous function that learns and constantly optimizes applications according to the business outcomes they support. Of course, we’re not yet at a point where steering wheels are an optional</description>
      </item>
      <item>
         <title>Four Reasons You Need To Rethink Your Server Monitoring Approach -</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/four-reasons-you-need-to-rethink-your-server-monitoring-approach</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/four-reasons-you-need-to-rethink-your-server-monitoring-approach</guid>
         <pubDate>October 24, 2018</pubDate>
         <description>In the application economy, delivering improved, innovative digital experiences to users and customers is critical. For the IT operations teams responsible for supporting these digital experiences, the stakes continue to grow, and systems play a critical role in whether organizations can meet their customers' and users' demands for digital services. Now, it's more critical than ever to track, manage and improve server performance. Consequently, server monitoring software and strategy represents a vital effort - one with real bottom-line consequences. While tracking, managing and improving the performance of servers is more vital than ever, these efforts are also more difficult than ever. More diverse. Gone are the days of standardized, homogeneous server implementations and technology environments. For IT teams, the number of server technologies, platforms and deployment models continues to expand. Teams need to support multiple server platforms running in on-premises deployments, public clouds, private clouds, hyper-converged systems and more. More dynamic. Technical environments continue to grow more dynamic as the use of virtualization, containers and cloud services continues to proliferate. Further, as organizations continue to embrace agile and DevOps approaches, applications and their supporting infrastructures continue to evolve much more rapidly, and servers are no exception. More expansive. Not only are server environments more complex, they continue to grow in scope. The data volume being managed and the number of systems that need to be supported continue to grow, as do the associated costs and workloads - putting increased strain on already stretched teams. More interrelated. While servers represent fundamental assets, they are ultimately one of many different services and elements that comprise the environment. All these different assets need to be aligned and performing efficiently if a quality user experience is to be delivered. 
As a result, it continues to grow more difficult to gain a cohesive view of the environment and,</description>
      </item>
      <item>
         <title>Look Beyond The Watch Tower for AWS Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/look-beyond-the-watch-tower-for-aws-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/look-beyond-the-watch-tower-for-aws-monitoring</guid>
         <pubDate>May 1, 2018</pubDate>
         <description>Amazon Web Services (AWS) leads the public cloud market in share and revenue. Now a $17 billion+ business for Amazon with 45% YoY growth, AWS offers numerous IaaS, PaaS, and SaaS services, along with a vast global presence offered through a growing number of regional datacenters. Amazon's growth attests to how much cloud usage is becoming the norm, and AWS continues to hold the pole position. Other popular offerings like Azure and Google Cloud Platform are growing but still have a lot of ground to catch up. The most frustrating issue for any cloud customer is that they are at the mercy of the cloud service provider to know of any outages. The service provider may not know of an outage affecting your tenancy, and may not post notifications on their service portals in a timely manner or frequently enough - and as an IT admin, you are helpless fielding calls from frustrated users. IT monitoring is both the art and science of conducting a planned sequence of probes or measurements of an IT component or task to assess whether it is operating within desired operational parameters, and providing that information via visualization, alerting or notifications. To perform these actions, every cloud provider has a set of tools/APIs to provide data around the key performance indicators. To understand how effective native tools are, it is important to consider where gaps exist in the IT infrastructure for organizations adopting AWS. Gaps in monitoring are also common in on-premises IT environments because organizations rarely have the budget, personnel or justification for every monitoring tool required. Moreover, AWS, like other cloud providers, provides a service and does not reveal all the internals of that service. CA Unified Infrastructure Management (UIM) takes a four-pronged approach to making sure IT admins' needs for AWS cloud monitoring are met:</description>
      </item>
      <item>
         <title>Switching Infrastructure Monitoring Tools: Making The Go/No-Go Decision - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/switching-infrastructure-monitoring-tools-making-the-go-no-go-decision-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/switching-infrastructure-monitoring-tools-making-the-go-no-go-decision-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>June 1, 2018</pubDate>
         <description>In today's software-centric world, infrastructures are more important than ever. They can make or break an application's experience, whether external or internal. Thus, monitoring the performance of its infrastructure is critical, and pretty much everyone has some kind of default monitoring tools deployed to do so. But as infrastructures become more dynamic and hybrid in nature, these tools need to be re-evaluated. There's a lot to consider when switching infrastructure monitoring tools, but this article will help ease you into making the go/no-go decision. It's easier said than done, though, when IT budgets are limited. There are fancier tools and technologies in the market, competing for the same budget. Is changing your point, reactive monitoring tools even worth the effort? Well, it certainly is. Recently I had the opportunity to work with Forrester Consulting to quantify some of the key benefits achieved by customers who switched to CA Unified Infrastructure Management (CA UIM), an award-winning, single solution for monitoring today's modern, hybrid IT infrastructures. The results were astounding, with 321% ROI. Pretty much all the customers interviewed had some sort of monitoring tools in place. Taking CA UIM's model as a sample, I will provide you with four key benefits you can easily calculate in order to gauge the value to your organization of possibly changing your infrastructure monitoring tools. I have put in a simple formula you can use to estimate the potential benefits to your organization, and also provided the number and/or percentage customers achieved with CA UIM as an example to help with your calculations. Note: the FTE acronym used in the formula means Full-Time Equivalent. (1) Savings from fewer triage calls If your new solution gives you a single, proactive view into your entire infrastructure stack, be it VMs or containers, cloud or on-premises systems, your</description>
      </item>
      <item>
         <title>Easily Ingesting Third Party Data Sources Into Operational Intelligence</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/easily-ingesting-third-party-data-sources-into-operational-intelligence</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/easily-ingesting-third-party-data-sources-into-operational-intelligence</guid>
         <pubDate>October 14, 2018</pubDate>
         <description>Digital Operational Intelligence aims to collect millions of data points for different data types into a single data lake. In large-scale distributed systems, it is very difficult to upload relevant custom data into a data lake and analyze the data files. This data holds information about a device's behaviors, trends, patterns and most end-to-end data flows. By applying analytics on the data types, individually or in tandem with each other, meaningful insights can be generated. Cue the &quot;Generic API Connector.&quot; The intent of the Generic Connector is to make it easy to ingest data from many data sources. The connector provides data ingestion through REST APIs and files. The Generic API Connector provides the capability to configure and ingest data through a REST API. Below are the key features of the generic API connector: 1. Create and update profiles as per data sources 2. List or fetch available profiles 3. List the custom indices fields (List Custom Fields API) 4. List the 3rd-party data source fields (List Source Fields API) 5. List the max/default values (List defaultOrMaxValues API) Data Types Supported List of compatible data types: 1. Alerts and Alarms 2. Metrics 3. Logs 4. Events 5. Inventory 6. Groups/Service Supported Authorization Types Supported authorization mechanisms are listed below: 3rd Party Data Source Requirements To add a new API data source, please consider the points below and use them as a checklist: Is the authorization mechanism listed in the compatible authorization types mentioned above? If the authorization type is not listed, then contact CA Support to add/implement the new authorization mechanism. Data source REST API endpoints should be available. Data needs to be exposed with HTTP GET requests. To get REST API details, search the new data source's documentation and identify all available resources on the API. Refer to the Data Source API documentation</description>
      </item>
      <item>
         <title>Monitor Your Data Center's Health over IPMI Using CA Unified Infrastructure Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/monitor-your-data-center-s-health-over-ipmi-using-ca-unified-infrastructure-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/monitor-your-data-center-s-health-over-ipmi-using-ca-unified-infrastructure-management</guid>
         <pubDate>April 30, 2018</pubDate>
         <description>The Intelligent Platform Management Interface (IPMI) is an abstract, standardized, message-based interface for hardware-based platform management systems, making it possible to control and monitor servers centrally (Figure 1). IPMI operates independently of the operating system (OS), allowing administrators to manage a system remotely even in the absence of an operating system or system management software. This makes it an ideal choice for hardware health and failure monitoring. Originally developed by Intel in the late 1990s (in cooperation with Dell, Hewlett Packard and NEC), IPMI has grown to become an industry-standard interface, supported across 200+ hardware vendors. Figure 1: Interfaces to the baseboard management controller (BMC) CA Unified Infrastructure Management (UIM) provides a comprehensive solution for data center monitoring using the CA ecoMeter probe, retrieving energy, power and other hardware health data from the target devices. The CA ecoMeter probe collects health information from devices using native protocols (like IPMI, BACnet, Modbus, SNMP, WMI, RF Code and OPC) (Figure 2), and unifies this information using its internally stored MIB tables. Figure 2: IPMI software stack In this post, we will outline a simple, step-by-step process to configure and use the CA ecoMeter probe in your data center environment for IPMI-based devices, so you can start monitoring their health metrics effectively with a minimal amount of configuration. Step 1: Prerequisites One of the key prerequisites for connecting the ecoMeter probe to the target device is to enable IPMI-over-LAN in your device’s native configuration. In addition, you will need to provide the IPMI user with administrator privileges, with access to all the relevant IPMI roles. Please look up your target device’s vendor documentation to find the exact settings. 
For example, for the Dell iDRAC 6 device family, the setting can be found under Remote Access &gt; Network/Security</description>
      </item>
      <item>
         <title>Three Essentials for Service-Driven Autonomous Remediation in AIOps Platforms</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/three-essentials-for-service-driven-autonomous-remediation-in-aiops-platforms</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/three-essentials-for-service-driven-autonomous-remediation-in-aiops-platforms</guid>
         <pubDate>October 23, 2018</pubDate>
         <description>In my last blog I talked about how Point, Reactive Monitoring and Automation Tools Don't Work For Today's Digital Businesses and why you need service-driven autonomous remediation. Here are three essentials for this capability to work: Predictive identification of potential risks to services Leveraging traditional, reactive monitoring tools and approaches, IT teams lack the insights needed to effectively predict issues before a business service or application is disrupted. Given the criticality of delivering a phenomenal user experience, these teams need an AIOps platform that offers algorithmic- or machine-learning-based insights for detecting abnormal behaviors and predicting potential issues. It's also essential that AIOps platforms offer capabilities for mapping issues to associated services, so IT teams can intelligently prioritize troubleshooting and remediation efforts based on which issues will have the biggest potential business impact. For example, if two issues arise and administrators can see that one is affecting a payroll service that isn't being run currently, and another is hitting an e-commerce service that runs 24/7 and accounts for the bulk of the company's revenues, they can prioritize their efforts accordingly. Automate root cause analysis across domains and technologies Even with the best predictive tools in place, downtime and performance issues may still arise, whether due to an administrator's configuration error, external service outages or a host of other causes. Within many IT organizations, when these performance issues or downtime occur, operators struggle to determine why. While a single issue may be the culprit, large numbers of redundant or false alerts may be generated, making it difficult for administrators to filter through the noise and identify the issue that needs to be addressed. 
At the same time, when operators see that a service is experiencing issues, it may be difficult to determine how or if the issue is affecting business services. To combat</description>
      </item>
      <item>
         <title>What is a Unified Data Model, and Why Would You Use It? - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-is-a-unified-data-model-and-why-would-you-use-it-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-is-a-unified-data-model-and-why-would-you-use-it-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>March 4, 2019</pubDate>
         <description>Managing modern application environments is hard. A unified data model can make it easier. Here's how. The nature of modern app environments Modern distributed application systems are growing increasingly complex. Not only are they larger and spread across scale-out environments, but they are also composed of more layers, due especially to the trend toward software-defined networking, storage and everything else. The environments are also highly dynamic, with configurations that are auto-updated on a recurring basis. Add to this picture microservices architectures and hybrid clouds, and things get even more complex. Whereas in the past you would typically have run a monolithic application in a static environment on a single server, today you probably have containerized microservices distributed across clusters of servers, using software-defined networks and storage layers. Even if you have simpler virtual machines, your infrastructure is still likely to be highly distributed, and your machine images might move between host servers. This complexity makes it difficult to map, manage and integrate multiple tools within your environment, especially when each tool uses its own data model. It creates multiple issues for DevOps practitioners and developers alike. What is a Unified Data Model? This is why organizations are increasingly adopting unified data models. A unified data model creates an opportunity for an organization to analyze data from multiple sources in the context of shared business initiatives. A unified data model forces your DevOps and development teams to determine the methods, practices, and architectural patterns that correlate to the best outcomes in your organization. It will also force your institution to future-proof your data architecture by leveraging new technology data types and attributes. 
As the complexity of systems increases, the diminishing returns of maintaining separate data models impact our ability to maintain and monitor web applications. Individual modeling for different systems creates a contextual</description>
      </item>
      <item>
         <title>CA APM Team Center Best Practices for Effective Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-team-center-best-practices-for-effective-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-team-center-best-practices-for-effective-monitoring</guid>
         <pubDate>March 27, 2018</pubDate>
         <description>How to reduce complexity with CA APM Team Center CA APM Team Center (ATC) is used to visualize your application component interaction. As the applications are starting up, let's say for the first time or if a new service is added, the CA APM agent automatically detects the new call path and sends that information to the enterprise manager (EM). The EM collects all these call paths or traces and stitches them together to provide the end-to-end view. This is extremely powerful and helps customers get a bird's-eye view of the request flow. For a large, complex environment, the map could get extremely complicated, with lots of vertices and edges. The key to making the map meaningful and relevant is to leverage the rich feature set that CA APM Team Center offers and apply some best practices. Let's look at a couple of them. Attributes Attributes are the metadata that provide additional information about a particular vertex or edge in ATC, e.g. the &quot;owner&quot; attribute provides information about the owner. CA Application Performance Management (APM) comes with a rich set of out-of-the-box (OOTB) attributes that can be further enhanced by adding custom attributes. When getting started, we recommend you check to see if a relevant OOTB attribute exists; if not, you can use attribute rules, manual updates or the REST API to update the attributes to accommodate your needs. Filters Filters provide a mechanism to view only what is relevant. Filters can be set on various attributes, and they can be grouped with logical &quot;and&quot; and &quot;or&quot; operators. This is an extremely important capability for reducing complexity. Ensure the proper filter is set. Layered perspective The latest APM release enhances the already powerful perspective feature by providing a layering capability. The</description>
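The attribute-and-filter model described above can be sketched in a few lines. The attribute names and the filter helper below are illustrative only, not the CA APM API: vertices carry attribute metadata, and conditions on those attributes are grouped with logical "and"/"or".

```python
# Illustrative sketch of ATC-style attribute filtering (not the CA APM API).
# Each vertex carries attribute metadata; a filter group combines
# (attribute, value) conditions with a logical "and" or "or".
vertices = [
    {"name": "frontend", "owner": "team-a", "type": "servlet"},
    {"name": "orders-db", "owner": "team-b", "type": "database"},
    {"name": "payments", "owner": "team-a", "type": "webservice"},
]

def matches(vertex, group):
    """group = ("and" | "or", [(attribute, value), ...])"""
    op, conditions = group
    results = [vertex.get(attr) == value for attr, value in conditions]
    return all(results) if op == "and" else any(results)

# View only team-a's servlets or web services
view = [v["name"] for v in vertices
        if matches(v, ("and", [("owner", "team-a")]))
        and matches(v, ("or", [("type", "servlet"), ("type", "webservice")]))]
# view == ["frontend", "payments"]
```

Composing small groups this way mirrors how narrowing a large map with a couple of well-chosen filters reduces the vertices and edges to just the relevant slice.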
      </item>
      <item>
         <title>Even Movies are a Modern Business</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/even-movies-are-a-modern-business-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/even-movies-are-a-modern-business-rally-software</guid>
         <pubDate>August 1, 2018</pubDate>
         <description>I was browsing Instagram the other day, and I saw this post from Dwayne Johnson (aka The Rock), one of the most famous people on the planet. One of the things that immediately struck me is how relevant it was to what we, as Agile enthusiasts and evangelists, strive to do every day. What would happen if I replaced the word &quot;audience&quot; with &quot;customer&quot;? Well, like replacing a cup of fine coffee with Folger's crystals[1] (raise your hand if you remember that commercial), let's see what happens when we do that. 1. It has to be customer first. Within the principles of the Agile Manifesto, the highest priority is to satisfy the customer[2]. That means that before we can do anything meaningful on the path to our shared success, we have to put our egos and many years of 'experience' aside. We need to put our customer in front. Too often, as agilists trying to solve a problem, we are quick to put forward what we believe to be the problem (and the solution) before we fully understand what is going on from the customer's point of view. This isn't to say that we shouldn't apply our knowledge to solve the issue the customer has; after all, that's what we're there for in the first place. However, it's important to understand that at this sentence's core, before we can help our customer be successful, we need to make sure we make a real effort at understanding their perspective. Achieving a customer-centric view will prepare us well to achieve Mr. Johnson's second question: 2. What does the customer want? Now that we understand our customer's viewpoint, we must use that as a foundation to make sure we actually know what they want, or to put it more directly, to discover their</description>
      </item>
      <item>
         <title>Automating NCM for Software Defined Networking</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/automating-ncm-for-software-defined-networking</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/automating-ncm-for-software-defined-networking</guid>
         <pubDate>February 18, 2019</pubDate>
         <description>Explore CA Spectrum's software defined networking automation capabilities for intuitive and simple network configuration management. Network Configuration Management (NCM) is the process of managing, monitoring and maintaining elements in the network. NCM is more important than ever as today's enterprise networks become larger and more complex due to software defined networking (SDN) and cloud. Such work can become repetitive and time consuming, eating up a valuable share of the operations team's resources. Despite the repetitive nature of these tasks, NCM continues to be a critical component of network management. To avoid such situations, we need a certain level of automation in network configuration management tasks. Typically, administrators rely on scripting for their automation needs, and at times a lack of scripting expertise makes NCM tasks complex and time intensive. As networks become increasingly complex with a mix of traditional and software defined networking technologies, a network operations team's time is best spent on more complex tasks and triage, and the team should increasingly rely on automation to take care of routine and repetitive items. But some aspects of NCM require scripting, which demands specialized skills that may be scarce in the marketplace. Advanced network monitoring tools that make network configuration management tasks intuitive and possible with simple clicks in a GUI, rather than requiring scripts, are a welcome alternative for today's network operations teams. Since CA Spectrum administrators are already familiar with this industry-leading network event and fault management toolset, using simple clicks via the GUI speeds up their process and reduces operational workload. Let's explore CA Spectrum's Network Configuration Management for intuitive and simple network configuration management possibilities. Global synchronization tasks on your network. 
Running the global synchronization task enables the Network Configuration Manager to capture and save all device configurations. Network Configuration Manager attempts to capture device configurations</description>
      </item>
      <item>
         <title>PPM 101: Resource management made easy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-resource-management-made-easy-clarity-ppm-project-portfolio-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-resource-management-made-easy-clarity-ppm-project-portfolio-management</guid>
         <pubDate>February 6, 2019</pubDate>
         <description>We recently published an eBook that looked at some of the challenges faced by resource owners when working in projects today. Building from that, we wanted to understand how Clarity PPM helps project managers overcome those challenges, so we reached out to Broadcom's very own resource management expert, Dave Sprague. Dave is a product management professional with 20 years in diverse industry segments. He has been with the Clarity PPM team for almost three years. We started our discussion by asking Dave to complete the sentence: &quot;Clarity PPM improves the capability of resource owners by:&quot; &quot;Filtering available resources and investments down to the department or team level, allowing the resource manager to match supply and demand across the enterprise,&quot; said Dave. &quot;Multi-value searches, like capacity based on role and geography, optimize an organization's already stretched staff. Once the resource manager finds the right people, they can allocate specific percentages of their workload without having to use a full-time equivalent calculation.&quot; This is not just a new way of looking at resource information; it's a fundamentally different approach to the discipline of resource management: recognizing that availability is a complex discipline that is far trickier than simply asking, who has bandwidth? Resource management should be more than resource allocation, yet for many organizations, and many tools, it's simply an exercise in finding an approximately correctly skilled and available person to match the need. Elevating resource management to a more strategic discipline takes things to a much different level, and Dave went on to explain just why this was so important to resource owners. &quot;From a resource perspective, resource managers gain visibility into all work for which their people are engaged within a familiar Excel paradigm. 
With telescoping, for example, resource managers are able to focus directly on identifying and solving problems.</description>
      </item>
      <item>
         <title>AIOps-Fueled Digital Experiences: What it Takes to Win the RACE</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-fueled-digital-experiences-what-it-takes-to-win-the-race</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-fueled-digital-experiences-what-it-takes-to-win-the-race</guid>
         <pubDate>June 12, 2018</pubDate>
         <description>Across industries and markets, your competitors are in a race to deliver innovative, consistently optimized digital experiences. Increasingly, this is the race that will separate the market victors from the rest. While optimizing service levels is critical in this endeavor, it's getting more challenging to do every day. Here are two key reasons: Complexity. Most enterprise-class business services now rely not only on traditional systems, including on-premises mainframes and distributed platforms, but on a plethora of new, dynamic technologies, such as containers, cloud delivery models, virtual and software-defined components, and more. Scale. The volume, variety, and velocity of data that needs to be managed, correlated, and analyzed continues to grow dramatically. In the wake of initiatives like multi-cloud deployments, microservices development, and Internet of Things (IoT) implementations, teams continue to see explosive growth in the operational data being generated. Ultimately, your internal team members simply can't keep pace. To understand the changing nature of complexity and scale, consider the explosive growth in per-host metrics associated with the move to containers. Traditionally, there would be around 150 metrics per host to track, with around 100 relating to the operating system and 50 to an application. Contrast this with a container-based implementation, where there will be 50 metrics per container and 50 metrics per orchestrator on the host. It's quite common to have a cluster running upwards of 100 containers on top of two underlying hosts. As opposed to a traditional implementation where running two hosts would require the monitoring of 300 metrics, in a container-based implementation, there would be over 10,000 metrics to track. How the R.A.C.E. 
is Won To deliver optimized user experiences and contend with the explosive growth in data, complexity, and user demands, your IT teams need to leverage Artificial Intelligence for IT operations (AIOps) capabilities.</description>
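The metric counts quoted above can be checked with back-of-the-envelope arithmetic. The per-host and per-container figures are the article's; the assumption that "upwards of 100 containers" means roughly 100 per host (rather than per cluster) is ours, chosen because it is the reading that reproduces the "over 10,000" claim.

```python
# Back-of-the-envelope check of the metric counts quoted in the article.
hosts = 2

# Traditional: ~150 metrics per host (≈100 OS + ≈50 application)
traditional = hosts * (100 + 50)  # 300 metrics across two hosts

# Containerized: 50 metrics per container plus 50 per orchestrator per host.
containers_per_host = 100  # assumption: "upwards of 100 containers" read per host
containerized = hosts * (containers_per_host * 50 + 50)  # 10,100 metrics
```

Under that reading, two hosts go from 300 tracked metrics to roughly 10,100, which is the scale jump the article describes.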
      </item>
      <item>
         <title>Enable Auto-Remediation and Remove Manual Processes with CA UIM 9.0.2</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/enable-auto-remediation-and-remove-manual-processes-with-ca-uim-9-0-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/enable-auto-remediation-and-remove-manual-processes-with-ca-uim-9-0-2</guid>
         <pubDate>April 1, 2019</pubDate>
         <description>CA Unified Infrastructure Management 9.0.2 is ready to make the monitoring process easier with the enablement of auto-remediation. By leveraging webhooks, you can now eliminate manual processes that you would normally take once an alarm was raised, making the triage process faster and more efficient. This auto-remediation process begins with the messagegtw probe, which uses a webhook, a simple notification message using an HTTP POST, to post alarms to external applications. As the messagegtw probe monitors your infrastructure and tracks metrics, an alarm is created if any thresholds are breached, which results in a webhook being triggered. Through a REST API, the webhook is able to post this alarm to any third party application. This third party application then starts the auto-remediation process, and publishes the job status back to CA UIM once it is completed. A common application of this would be to invoke a REST API of an external application with a customizable JSON payload. The payload can be customized and configured to use variable substitution from configuration item, metric, and the alarm. You can then, for instance, use the webhook to trigger an issue in JIRA, send an SMS message using Twilio, post a message to a Slack channel, or send the alarms to any other third-party application or web page. Here are the steps you need to take to configure the messagegtw probe so you can use webhooks to start the auto-remediation process. Verify Prerequisites (Optional) Configure General Properties Configure REST Endpoint/Webhook Verify Prerequisites Create an Auto-Operator in NAS for webhook Specify the instance of messagegtw to which the queue should post Create an 'attach' queue in the hub; the queue that you create is used by messagegtw to publish messages to webhooks Configure General Properties Follow these steps: Navigate to the Setup node and update</description>
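The webhook flow described above, an alarm rendered into a customizable JSON payload via variable substitution and delivered by HTTP POST, can be sketched as follows. The payload fields, the alarm values, and the endpoint URL are all illustrative assumptions, not the actual messagegtw template syntax:

```python
import json
from string import Template
from urllib import request

# Illustrative sketch (not the messagegtw implementation): a JSON payload
# template with variable substitution from alarm fields, posted to an
# external webhook endpoint via HTTP POST.
payload_template = Template(json.dumps({
    "source": "$hostname",
    "metric": "$metric",
    "severity": "$severity",
    "message": "Threshold breached: $metric on $hostname",
}))

# Hypothetical alarm produced when a monitored threshold is breached
alarm = {"hostname": "db-01", "metric": "cpu_usage", "severity": "critical"}
body = payload_template.substitute(alarm).encode("utf-8")

req = request.Request(
    "https://example.invalid/webhook",  # hypothetical third-party endpoint
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would deliver the alarm to the third-party application,
# which could then run its remediation job and report status back.
```

The same pattern applies whether the receiving endpoint is JIRA, Twilio, Slack, or any other application exposing an HTTP API.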
      </item>
      <item>
         <title>Business Success Begins with Strategic Roadmapping</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/effective-strategy-execution-begins-with-business-roadmapping-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/effective-strategy-execution-begins-with-business-roadmapping-clarity-ppm</guid>
         <pubDate>March 11, 2018</pubDate>
         <description>The advent of digital business has shortened the timing horizon for all aspects of running a business. New, improved products and services need to reach the market at turbo speed. Cost structures need to become more efficient and more sustainable. Future operating plans will need to seamlessly change at a moment’s notice due to the next groundbreaking innovation announcement. Now, juxtapose this future with today’s reality. In 2018, the majority of organizations still can’t get their strategy executed or more than 65 percent of their projects successfully completed. Clearly something has to change, and it needs to change now. So what’s the answer? Essentially, organizations need to move from a false expectation of certainty to a certain expectation of continuous change. That means today’s slow-moving, process-heavy strategic planning approaches are legacies that organizations can no longer afford. A new and very different mind-set is required, and at the risk of using a word that has become passé, we need to embrace a new paradigm that focuses on continuously informed decision making rather than rules, paperwork and oversight. How can a company build a culture that values real-time decision-making capabilities? One easy place to start is by defining some simple ways in which information and ideas can be discussed at all levels of the organization. Based on experience, we strongly recommend adopting the concept of simple, lightweight strategic roadmaps to facilitate business unit–enterprise wide discussions. Notice the word “lightweight” in front of the words “strategic roadmapping.” We’ve been advocating strategic roadmapping to organizations for years, and what we’ve seen in most cases is that the first thing anyone does is try and make it too complicated. For a business unit (BU), we recommend starting with a simple timeline showing desired business outcomes and the earliest practical date they can be delivered.</description>
      </item>
      <item>
         <title>The Latest in NetOps Monitoring for Cisco and Versa SD-WAN</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-in-netops-monitoring-for-cisco-and-versa-sd-wan</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-in-netops-monitoring-for-cisco-and-versa-sd-wan</guid>
         <pubDate>April 2, 2019</pubDate>
         <description>CA's NetOps 19.1 release simplifies Cisco Viptela and Versa SD-WAN management to help you optimize costs and validate the best path for application delivery. We all know that SD-WAN is the hottest software-defined technology at the moment. But is it really optimized to save you money and also deliver an application experience your customers expect? Luckily, SD-WAN vendors such as Cisco and Versa are providing rich programming APIs that enable your NetOps solutions from CA to leverage high-volume SD-WAN performance trend data along with capacity, cost, and projection analytics to evaluate the performance of non-guaranteed transports along with the quality of service of your WAN applications to balance cost and quality. How are we doing it in the latest release of CA NetOps 19.1? Let's break it down here with a familiar use case. Use Case: I need to understand how my Cisco Viptela or Versa SD-WAN infrastructure performs and supports my policies. Basically, you need to monitor your SD-WAN networks to ensure site availability via contextual alarms, performance, and traffic metrics related to your policy-defined applications. With the latest release of NetOps 19.1, we help our customers: Simplify NetOps with unified workflows combining REST, Alarm, SNMP, and Flow data. Quickly understand WAN performance across multiple protocols and data sources to solve problems fast. Reduce alarm noise and surface the right data at your fingertips. Easily identify problem sites or connections tied to your WAN providers or your own infrastructure. Enhance operator visibility across complex, modern technologies. Generate threshold alarms on control and data plane inventory to better understand when and why problems are or could be occurring. Ensure policy-based performance via our &quot;Trusted Application Paths&quot;. 
Easily understand how your infrastructure supports your application policies to deliver applications across your WAN balancing both cost and</description>
      </item>
      <item>
         <title>Clarity PPM Modern Business Management: Driving Strategy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/modern-business-management-driving-strategy-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/modern-business-management-driving-strategy-clarity-ppm</guid>
         <pubDate>September 4, 2017</pubDate>
         <description>Modern Business Management has to start with leadership, but it has to reach the lower-level execution stages as quickly as possible. There's a need to more closely integrate leadership and delivery functions in order to improve the quality of project delivery. This is a critical element of modern business management, and in this post I want to look more closely at how that integration occurs, focusing on how strategy drives execution. We discussed the concept of enterprise agility: the need to adjust and evolve strategy to respond to threats and opportunities in the organization's environment. This results in strategy being a very fluid notion: While there should be directional consistency in the medium term, the specific strategic goals will evolve continuously as customer demands, market opportunities and operational necessities shift. This must result in similar ongoing adjustments of the projects that are the mechanism for delivering that strategy, in order to maintain alignment between the benefits being delivered and those that are required. For that process to be effective and efficient, it cannot involve all of the decisions being made at the strategic level. That would not only consume too much time and effort analyzing change, but it would also separate decision making from where the knowledge and understanding of the projects that need to absorb those changes resides: the execution level. Instead, decision making on the mechanics of the changes necessary to maintain alignment with strategic goals must exist within the teams that are delivering those projects. Project managers and their teams must be empowered to change project elements to ensure their initiatives still deliver &quot;on benefit&quot; even when that benefit has evolved from what was originally envisaged. This distributed decision making is challenging for both leadership and project functions. 
Leaders are relying on relatively low-level teams to make decisions</description>
      </item>
      <item>
         <title>If Your Portfolio Isn’t Executing, What Is It Doing?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/if-your-portfolio-isn-t-executing-strategy-what-is-it-doing-clarity-ppm-project-portfolio-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/if-your-portfolio-isn-t-executing-strategy-what-is-it-doing-clarity-ppm-project-portfolio-management</guid>
         <pubDate>October 3, 2017</pubDate>
         <description>One of the most significant areas in which a modern project management office (PMO) will need to make changes in the future is with its project and portfolio management approach. While the list of things that will ultimately need to be done is long, fortunately the place to start is with a few simple shifts in your current thinking: Move away from the mental model of demand management Regard all proposals as major investments of the enterprise's valuable resources (i.e., it isn't just about projects anymore) Begin to practice radical transparency Use &quot;contribution to strategy&quot; as your mandate and implicit authority Part of the change that is occurring with the advent of digital business is that, increasingly, an organization's portfolio of internal investment options is no longer regarded as just demand being placed on IT. It isn't that demand for IT isn't important (it's still one of the most critical execution issues), but at the front end of the portfolio process, contribution to strategy matters more. Moving up the maturity curve from demand management to practicing true portfolio management isn't an overnight activity. The first change we recommend is segmenting demand. Strategic investments do NOT belong in the same intake process as low-level service requests. Digital business requires having an intake process, supported by the right tool, that doesn't treat a multi-million dollar investment proposal with the same level of gravitas as a 40-hour change request. To put it bluntly, the difference between a Level 2 maturity PMO (process driven, tactically focused) and a modern PMO (strategically focused) is the ability to get out of the weeds. The second change we recommend for moving beyond demand management is to move the portfolio management function into a separate department and retitle it something like investment portfolio office (IPO). Your goal is</description>
      </item>
      <item>
         <title>Enterprise Level Device Availability Report in DX Infrastructure Manager 9.1</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/enterprise-level-device-availability-report-in-dx-infrastructure-manager-9-1</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/enterprise-level-device-availability-report-in-dx-infrastructure-manager-9-1</guid>
         <pubDate>June 11, 2019</pubDate>
         <description>The job of a system administrator is to make sure the data center is always available for serving the business services IT supports. If device availability is the key KPI that the IT operator, system administrator or cloud administrator is tracking, there is a crucial need for a device availability report that tracks physical, virtual or cloud devices at the enterprise level. The device availability report is composed of two parts. The first part indicates whether the device is powered up or not, i.e. system uptime. The second is device reachability, which determines whether the device is connected to the network and is reachable from endpoints. Many times, IT administrators focus only on reachability, or in other words, connectivity. If the device is not reachable, they tend to mark the device as unavailable. This method is incorrect in cases where users are not supposed to, or allowed to, connect to certain devices in the data center. In these cases, if the device is powered up, it should be marked as available, and reachability is secondary. Both arguments are correct depending on the situation and the role of the device. To cater to both schools of thought, the DX Infrastructure Manager (formerly CA Unified Infrastructure Management) device availability report shows both the availability percentage and the reachability percentage. Availability Percentage is defined as the percentage of time in a selected period where the device is powered up. Reachability Percentage is defined as the percentage of time in a selected period where the device is connected and accessible from the endpoints. Devices can also have planned/scheduled maintenance, and therefore the availability and reachability percentages are calculated taking this maintenance time into account so that the calculation is accurate. In a hyper-converged data center, there can be three types of devices, e.g. physical, virtual and cloud compute</description>
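The two percentages defined above can be sketched as a small calculation. This is a minimal illustration under the assumption that planned maintenance time is subtracted from the measurement period before either percentage is computed; the function and variable names are ours, not DX Infrastructure Manager's:

```python
# Minimal sketch of availability vs. reachability percentages over a period,
# with planned maintenance time excluded (an assumption about how the report
# accounts for maintenance; names are illustrative).
def percentage(seconds_in_state, period_seconds, maintenance_seconds):
    effective = period_seconds - maintenance_seconds  # exclude maintenance window
    return round(100.0 * seconds_in_state / effective, 2)

DAY = 24 * 3600
maintenance = 2 * 3600   # 2-hour planned maintenance window
powered_up = 21 * 3600   # time the device was powered up (system uptime)
reachable = 20 * 3600    # time the device was reachable from endpoints

availability = percentage(powered_up, DAY, maintenance)   # 95.45
reachability = percentage(reachable, DAY, maintenance)    # 90.91
```

Note how the two figures diverge: the device was up for an hour in which it was unreachable, which is exactly the case where marking it "unavailable" on reachability alone would be misleading.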
      </item>
      <item>
         <title>Modern PMO Delivers Value through Strict Focus</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-pmo-delivers-value-through-focus-and-alignment-clarity-ppm-modern-pmo</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-pmo-delivers-value-through-focus-and-alignment-clarity-ppm-modern-pmo</guid>
         <pubDate>November 23, 2017</pubDate>
         <description>As organizations struggle to more rapidly respond to changing market conditions, the modern PMO has two options: become part of the solution or get sidelined. According to Gartner senior research analyst Mbula Schoen, &quot;As enterprises endeavor to innovate with a range of technologies and transform the business at speed, the PMO must evolve its service and function model to support these massive changes or risk being relegated to the sidelines.&quot;[1] When there's a shift in the business paradigm, companies are often faced with a challenge and an opportunity. A shifting marketplace can be seen as a threat to current revenue or an opportunity to emerge as a leader within a new paradigm. Employees are no different, and the PMO is a perfect example. As business cycles shorten and competitive threats abound, the PMO can opt to keep a low profile and see its influence slowly erode, or it can see the changing landscape as an opportunity to become more relevant than ever. PMOs striving for the latter must become experts in ensuring that all projects deliver business value in a fast-changing environment. Asking the following questions regularly, and answering them with real-time, relevant data, can help you achieve this. 1. Are we validating and adjusting our assumptions regularly? In today's app economy, where circumstances change quickly, planning must be a continuous exercise. The PMO should incorporate real-time modeling, optimization algorithms and collaboration tools to continuously ensure the right solutions are under development. This means monitoring developments in the marketplace, responding to the changing needs of customers and auditing internal circumstances that might impact the outcome. Projects should be adjusted regularly, and teams should be provided with a living delivery framework to guide them. 2. Are we maintaining alignment across the board? 
For maximum efficiency, PMOs must ensure alignment across the organization.</description>
      </item>
      <item>
         <title>Overcome Network Monitoring Challenges in Cisco ACI Environments</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/overcome-network-monitoring-challenges-in-cisco-aci-environments</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/overcome-network-monitoring-challenges-in-cisco-aci-environments</guid>
         <pubDate>April 23, 2018</pubDate>
         <description>Achieve end-to-end application response visibility and monitoring in Cisco ACI Environments. Many companies today choose a Cisco networking infrastructure to service their physical and virtual networking needs for enterprise data center operations. These enterprises also plan to migrate to the latest software defined networking (SDN) technologies to help network operations (NetOps) deploy network services quickly to respond to competitive conditions and user demand. Cisco is incorporating various new technologies, like Cisco Application Centric Infrastructure (Cisco ACI) and software defined networking (SDN) into its networking equipment but these new technologies can cause disruptions in your existing monitoring strategies. This includes mirroring technologies for packet and flow data, e.g. switched port analyzer (SPAN), remote SPAN (RSPAN), encapsulated remote SPAN (ERSPAN), and VLAN access-list (VACL) that have issues with encapsulation and other new networking technologies. All of this creates a need to have comprehensive network visibility to overcome any limitations and maximize the use of Cisco equipment; while at the same time having an advanced network monitoring strategy that enables measurement of application performance on the underlying network. Let’s review a few challenges with packet capture in a Cisco ACI environment and then discuss the CA solution that overcomes these challenges and enables proactive network troubleshooting and triage. Challenges of Data Visibility with Cisco ACI The Cisco ACI architecture focuses on distributed applications. It uses a centralized controller and an overlay structure to create, deliver and automate application policies throughout the network. Access to data monitoring can be accomplished either by use of network TAPs or SPAN-related technology, depending upon the architecture implementation. 
However, issues like duplicate packets and the need for data filtering capabilities still exist and can create a significant burden for many network tools. For instance, redundant traffic streams and a distributed leaf and spine architecture means that one should</description>
      </item>
      <item>
         <title>Top 3 Strategies for Successful SD-WAN Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/top-3-strategies-for-successful-sd-wan-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/top-3-strategies-for-successful-sd-wan-monitoring</guid>
         <pubDate>June 5, 2019</pubDate>
         <description>Broadcom's recommendations for overcoming the challenges of SD-WAN management, and the recommended strategies to get the full return on your software-defined investments. SD-WAN (software defined wide area network) is probably the most successful business adoption of SDN architecture, and it continues to grow: IDC forecasts the market will reach $4.5 billion by 2022. As with many new technologies, SD-WAN deployment also comes with new challenges, especially for NetOps teams, who are well trained and expert in traditional WAN management but not familiar with this &quot;new&quot; WAN. Challenges of SD-WAN Management In my interactions with the network operations teams of various enterprises adopting SD-WAN, there is a common consensus that though SD-WAN has advantages over an expensive MPLS network, it presents new operational challenges, like integrating smoothly with existing network monitoring solutions. This concern is very important when you consider that, contrary to marketing claims, SD-WAN is not replacing MPLS but actually presents a low-cost alternative for some portion of traffic that doesn't have a mission-critical SLA requirement. Proactively identifying performance issues and a lack of comprehensive network visibility have always been important challenges faced by network professionals. Many network assurance providers share that a well-planned monitoring strategy increases visibility and helps in network diagnostics. SD-WAN, and SDN products in general, create more virtual components in an existing network, making it more complex to manage, troubleshoot and triage. With SD-WAN, your applications also become part of network operations, because this technology allows application-aware routing and uses any available transport (dynamic path selection) that satisfies a given SLA. 
Recommendations for Successful SD-WAN Monitoring It is very important to not monitor a new technology just for the sake of it, but to add context to that monitored data while giving business views into the application delivery</description>
      </item>
      <item>
         <title>FITPAL for Healthy Networks and Healthier Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/fitpal-for-healthy-networks-and-healthier-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/fitpal-for-healthy-networks-and-healthier-monitoring-tools</guid>
         <pubDate>February 23, 2018</pubDate>
         <description>Suddenly the network is cool again. Tech trends such as the Internet of Things, software-defined networking and growing end-user expectations all add up to a demand for &quot;dial-tone&quot; network performance and reliability. But with the data deluge continuing to accelerate, and organizations relying on multiple clouds to achieve business goals, how can your network and monitoring tools keep up, stay healthy and meet user demands? The days of &quot;set it and forget it&quot; network administration are over. No longer can you wait for something to break before making changes to keep your network healthy. Continual monitoring and optimization of your network health is critical for business success in today's digital age. To that end, CA Technologies has coined an acronym to help enterprises remember the essentials to consider when adopting modern monitoring tools. FITPAL, which stands for Fault, Inventory, Topology, Performance, Application, and Logs, represents the converged data streams needed for advanced network visibility. A good network monitoring and management tool for traditional, software-defined and hybrid cloud environments should incorporate all of those data streams for healthy network operations. FITPAL represents a baseline for the types of information that a modern network monitoring and management platform requires for full visibility. So why is each type of data critical to your enterprise? Fault: Capturing network fault data becomes more valuable when it can be correlated to network performance issues in a &quot;single pane of glass.&quot; For example, if a fault occurs but has negligible impact on application performance, remediation can take a back seat to more pressing issues.
Inventory: In hybrid or multi-cloud environments, it is imperative to have a good handle on physical, virtual</description>
      </item>
      <item>
         <title>Clarity PPM Modern Business Management: Operating Quickly</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-business-management-environment-operating-at-the-speed-of-business-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-business-management-environment-operating-at-the-speed-of-business-clarity-ppm</guid>
         <pubDate>December 6, 2017</pubDate>
         <description>Modern business management (MBM) is more than a concept; it's an integrated approach to everything an organization does – and that requires an environment built around it. You can look at modern business management from a top-down or bottom-up perspective. Success requires both elements not only to coexist but also to be fully integrated, and that leads to the subject of this blog – the fully integrated modern business management environment. Let's start with that word integration; it's the cornerstone of making MBM work. MBM combines enterprise agility with portfolio management to enable an organization to adapt and adjust quickly and with minimal disruption. That cannot happen unless every element of MBM is tightly integrated, minimizing inertia and maximizing the ability to pivot when business circumstances change. In part, that can be achieved through streamlined processes, effective communication links and common goals, but those elements will only get an organization so far. True integration that binds an organization tightly together and optimizes performance requires two other elements that on the face of it couldn't be further apart: An integrated technology platform that combines project portfolio management (PPM) functionality (investment management, planning and modeling, financial management, etc.) with project execution, collaboration and business intelligence (BI). Only when all of these elements combine can an organization obtain the information it needs to make the right decisions and the insight necessary to make those decisions swiftly and with confidence. A cultural evolution that empowers employees at all levels to do whatever is required to ensure the organization is continuously adjusting to optimize performance against opportunities, to satisfy customers' evolving needs and to overcome challenges as efficiently and effectively as possible.
It is this cultural evolution that will take the longest to achieve but which ultimately will provide the springboard to even greater success. MBM</description>
      </item>
      <item>
         <title>A Modern PMO Nurtures an Entrepreneurial Culture</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-the-modern-pmo-nurtures-an-entrepreneurial-culture-modern-pmo</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-the-modern-pmo-nurtures-an-entrepreneurial-culture-modern-pmo</guid>
         <pubDate>January 7, 2018</pubDate>
         <description>Did you ever hear the phrase “Good ideas are a dime a dozen”? A lot of people disagree with this statement. And the truth is, it’s bad ideas that are a dime a dozen. I could illustrate a few humorous examples here, but instead I’ll direct you to the healthy pool of bad ideas on any episode of Shark Tank. Good ideas can be incredibly lucrative to your business. The trick is in taking the good idea, measuring its potential against the cost of delivering it, prioritizing it against other good ideas and then executing on it effectively and efficiently. Do these things and you’ve made a good idea into a great—and more likely profitable—product or service. That sounds like a simple plan, but it’s actually an enormous challenge for most organizations. That’s why the role of the modern PMO has evolved over the past several years—to guide this process. It starts by eliciting and nurturing an entrepreneurial culture, which is synonymous with creativity, agility and innovation—it’s where good ideas are born and raised. The modern PMO looks to leverage these same characteristics. Team oriented: Start-ups lack strict and extensive hierarchical structures because most new businesses haven’t been around long enough to construct them. That’s a huge advantage when you consider the evidence that teams with flatter structures outperform those with more traditional hierarchies. Flatter, smaller teams with autonomy nurture creativity, drive productivity and improve delivery more effectively than larger teams with traditional structures. The flatter the team, the more empowered its members are when it comes to decision-making authority. This is important, because bureaucracy has the potential to significantly slow decision-making processes, which can be toxic in a fast-moving market. Further, a flat organization allows you to avoid situations in which rivalries lead to decisions that benefit one department</description>
      </item>
      <item>
         <title>Two Key Tools for the Strategic Realization Office (SRO) - Clarity PPM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/two-key-tools-for-the-strategic-realization-office-sro-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/two-key-tools-for-the-strategic-realization-office-sro-clarity-ppm</guid>
         <pubDate>August 28, 2018</pubDate>
         <description>To deliver strategy effectively, the strategic realization office, or SRO, needs to leverage two separate but closely related disciplines. In this post we want to look at how the SRO facilitates the delivery of strategy. Remember we said that modern corporate strategy had to focus on growth and innovation, and that inevitably means a focus on customer offerings. The products and services a company offers are the only sustainable way for revenue to be generated, and profitable revenue is the currency of growth. That leads us to the first tool the SRO needs to be able to leverage: product portfolio management. We’ll call it product PM for reasons that will become clear later. Product PM is an integrated strategy for managing all the products and services the company provides. It includes lifecycle management, market strategy and development approaches for individual offerings and also the investment management strategy for the collective products and services. This second element will include ensuring a balance of products across different lifecycle phases, market positioning, growth versus consolidation, etc. Product PM is owned by the executive who oversees the various product and service offerings, but the SRO has a key role to play in the process. We noted in the last post that consistent growth requires the ability to innovate continuously, and clearly that innovation must be concentrated in the products and services the organization develops. Such innovation requires investment, commitment and planning to ensure it is focused in the right areas, that the risks are managed effectively and that the results are optimized to market expectations. While all of those factors are fluid, that can only happen if there is a long-term vision for products, both individually and collectively. That is the concept of the product roadmap that sets out the long-term goals for each</description>
      </item>
      <item>
         <title>Latest in NetOps Fault and Performance Monitoring for Cisco Meraki</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/latest-in-netops-fault-and-performance-monitoring-for-cisco-meraki</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/latest-in-netops-fault-and-performance-monitoring-for-cisco-meraki</guid>
         <pubDate>May 5, 2019</pubDate>
         <description>CA NetOps 19.1 expands its technology coverage to support fault and performance monitoring of Cisco Meraki cloud-based wireless networks. Simple Network Management Protocol (SNMP) has been the driving force behind strides made in the monitoring world. In years gone by, an expert in SNMP and traditional wired and wireless networks was in high demand and always had control over the network. Typically, they relied on a single tool or application for monitoring and troubleshooting. The &quot;ping&quot; was the &quot;panacea&quot; that kept trouble at bay. Networks today have evolved and are continuously changing, with virtual devices, software-defined architectures and cloud-based, controller-less wireless networks. All of these technologies add several layers of complexity, force the use of separate tools and demand expertise beyond traditional SNMP. NetOps 19.1 introduces support for Cisco Meraki Wi-Fi across both fault and performance monitoring. Our solution provides inventory and fault information from CA Spectrum and network performance from CA Performance Management. All of these network monitoring metrics are unified in the NetOps portal, where we provide a single-pane view for easy intelligence into the health of your Cisco Meraki Wi-Fi networks, starting with alarms for fault isolation and performance metrics for the cloud controller and access points. CA NetOps portal views show alarm details on severity, root cause and fault isolation, with subsequent in-context details for latency and reachability trends for access points, along with Cisco Meraki performance metrics for the controller and clients, and bytes in and out for access points.
From the NetOps portal, operators have</description>
      </item>
      <item>
         <title>API and Microservices Virtual Summit for Smart Cities, Finance and Healthcare - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/api-and-microservices-virtual-summit-for-smart-cities-finance-and-healthcare-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/api-and-microservices-virtual-summit-for-smart-cities-finance-and-healthcare-layer-7-api-management</guid>
         <pubDate>August 12, 2018</pubDate>
         <description>What do banks, smart cities and healthcare organizations have in common? A need for a modern application architecture that can support getting digital business initiatives into market fast and deliver competitive customer experiences today and into the future. The issues facing all three will be part of our Modernize Application Architectures with Microservices and APIs Virtual Summit Series on August 22. While each industry use case is nuanced, most businesses are facing similar issues when it comes to digitally transforming how they operate and connect with their customers. Built with APIs, microservices and pervasive connectivity, a modern application architecture enables cloud, mobile and Internet of Things (IoT) experiences that reach across verticals to transform everything: from how we shop and travel to how we deliver healthcare and emergency services. Attend the summit to get a deep dive into how APIs and microservices form the foundation of a modern application architecture from some of the brightest minds and organizations. Among the topics that will be discussed: Smart Cities and Places: Technology and Standards Enhance Quality of Life: Deloitte’s Hugo Serra shares how CitySynergy, an integrated city operating system that combines a central command center with API integration and security, data visualization, integration, deep analytics and process automation, is helping city managers and stakeholders coordinate response efforts, protect citizens, and drive economic growth. Public Safety: Data Integration and Security When Failure is Not an Option: FirstNet is a high-speed, nationwide wireless broadband network dedicated to public safety. But besides the federal government, innumerable state and local agencies must be able to interoperate seamlessly with it to ensure public safety in every community.
Learn how a modern application architecture enables this broad ecosystem to integrate, monitor and test diverse systems running on FirstNet to enable their availability and security during times of crisis. Healthcare on</description>
      </item>
      <item>
         <title>General Availability Announcement for CA NetOps 19.1</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/general-availability-announcement-for-ca-netops-19-1</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/general-availability-announcement-for-ca-netops-19-1</guid>
         <pubDate>April 1, 2019</pubDate>
         <description>On behalf of CA Technologies, a Broadcom Company, we appreciate your business and the opportunity to provide you with high-quality, innovative software and services. As part of our ongoing commitment to customer success, we regularly release updated versions of our products. We are pleased to announce the general availability of our NetOps 19.1 release. With this release, we are proud to deliver operational simplicity and improved time to value to our customers through advanced AI capabilities and unified network visibility that delivers enhanced NetOps intelligence into the user experience traversing modern architectures. NetOps 19.1 consists of the following Network Operations Suite of products: CA Performance Management 3.7, CA Spectrum 10.3.2, CA Network Flow Analysis 10.0, CA Virtual Network Assurance 3.7 and CA Mediation Manager 3.7. New features for NetOps 19.1 include: AIOps: Network Flow dashboards in CA Operational Intelligence; Modern Architectures: Cisco® (Viptela®) SD-WAN Performance, Fault and Flow monitoring; Modern Architectures: Versa SD-WAN Performance, Fault and Flow monitoring; Modern Architectures: Cisco Meraki™ Wi-Fi Performance and Fault monitoring; Modern Architectures: AWS® Cloud Network Performance, Fault and Flow monitoring; Unification: Enhanced Unified NetOps Portal with integrated Flow and Fault monitoring; Other: SNMPv3 Traps filtering and forwarding capability; Other: Support for APIs for Network Flow; Other: Platform and third-party product updates. Register here for our webcast, &quot;What's New in NetOps v19.1,&quot; on April 10, 2019, to see all these features discussed and/or demoed and get any questions answered. We have included individual Release Notes that detail the features and highlights of the NetOps products release.
We also encourage you to visit the CA product information page on the CA Support portal at https://support.ca.com/ and DocOps.ca.com. You can download your copy of products online at https://support.ca.com/, where you can also utilize CA's case management system. To install your product, follow the installation procedures for your product</description>
      </item>
      <item>
         <title>3 Similarities to Cloud and On-Prem Server Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/3-similarities-to-cloud-and-on-prem-server-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/3-similarities-to-cloud-and-on-prem-server-monitoring</guid>
         <pubDate>February 20, 2018</pubDate>
         <description>Countless hours are spent evaluating, testing, migrating and building cloud-ready and cloud-native applications. The new era of cloud computing is driving data center migrations and application development, with operations management reluctantly following along. Without a doubt, there are many new avenues to pursue server monitoring from a cloud perspective. Let's take a look, for a moment, at the similarities between the two. The valuable experience gained over years of monitoring traditional infrastructure may still apply, and can be used to inform the new age of operations, aka cloud monitoring. 1. Beyond Compute First there was physical equipment, running at capacity, costing time and resources – and requiring ever more resources. Later, virtualization increased scale and on-demand resource allocation for critical applications. Then the cloud burst with cleansing rain, promising serverless compute, high availability across geographic regions, and most importantly lower barriers to entry and as-needed resource allocation. All the while, application responsiveness and end-user experience remained the key to success across these architectures. The compute resources themselves can come from any combination of these stages, but what about the application? While compute has long been the key to accurate performance management, it's critical to extend the view of the environment to what's being run there. Be it SQL Server, Apache web services, or cloud resources like Office 365, having the application view of resource consumption and any latency associated with it completes the picture. Having both agentless and agent-based abilities to capture those critical metrics is key. 2. Hypervisor Perspective In much the same way, cloud infrastructures today function as a large virtualized environment. Historically, NOC teams have relied on agent-based functions to capture information on bare-metal systems.
That same window into reality exists in cloud platforms today</description>
      </item>
      <item>
         <title>What's So Special with Your PPM?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/what-s-so-different-with-your-ppm-clarity-ppm-project-portfolio-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/what-s-so-different-with-your-ppm-clarity-ppm-project-portfolio-management</guid>
         <pubDate>February 14, 2019</pubDate>
         <description>If you’re reading this, you’re interested in PPM. You may have a solution already, or you may be looking to implement one. And every vendor will tell you theirs is different. But when you ask how it’s different you don’t get much: a different interface, a couple of workflow variations, but that’s about it. Clarity PPM really is different. Its engineers defined project portfolio management (PPM) processes in the early 2000s, and our competitors are still working to refine them. We’ve moved on, recognizing that traditional PPM approaches in isolation don’t work anymore because of the way the world has evolved. We view PPM not as a series of different functions loosely tied together, but as a single integrated platform that makes it simple to get work done, usable in the real world to simplify that work and powerful enough to make business more effective. We achieve that with a number of powerful elements. Strategic roadmaps bring executive-driven top-down planning to the table to provide a results-focused addition to traditional bottom-up planning. Clarity PPM is accessible to all users, regardless of their background, allowing all business leaders to spin up projects in minutes. Those projects can then be managed intuitively with task boards and scoreboards that allow relevant information to be shown in ways that work for each person. Team collaboration eliminates the administrative overhead of long meetings, providing meaningful digital collaboration that allows for the sharing of information in ways that work for teams. In addition, we free those team members to work whenever and wherever they want with integration of key functionality like time tracking with mobile devices. We then power the entire platform with business intelligence and analytics to create a solution that truly supports innovative growth. If you want a simple, usable, powerful</description>
      </item>
      <item>
         <title>API Discovery: The Most Overlooked Element of Your API Program - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/api-discovery-the-most-overlooked-element-of-your-api-program-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/api-discovery-the-most-overlooked-element-of-your-api-program-layer-7-api-management</guid>
         <pubDate>September 25, 2018</pubDate>
         <description>So, why is API discovery the most overlooked element of your API program? Simply because most API programs start out as technical projects, with an API here and there for integration purposes, enabling partners, or getting that first mobile app out the door. As organizations continue to buy into the benefits of APIs, and as adoption continues, the top priorities typically are making sure those APIs are secure, scalable and performant. Architecture teams usually start addressing those requirements by investing in an API Gateway. But often that's where progress in the maturity of API programs stops for a while. Read on to learn about the next level up in building a full-blown API ecosystem: enabling self-service API discovery and consumption for developers and partners. What is API Discovery and Why is it Important? For internal API programs: API discovery is all about enabling development efficiency and innovation. If API developers are creating services that provide access to applications or model digital business capabilities, it's important to make those capabilities discoverable and easy to use. If an API is easy to find, easy to understand, and easy to get access to, developers and partners can build valuable apps and integrations that much faster and more easily. Following best practices for API discovery also prevents building duplicate APIs because a developer doesn't know one already exists, and spurs ideas for new ways to add value by mixing, matching and combining digital capabilities. For external API programs: Your API is your product and your customer is potentially a developer or partner. API discovery in this context also focuses on providing a digital storefront where your API users can find what you have to offer, learn about what value it has, and take the first steps</description>
      </item>
      <item>
         <title>Transforming Financial Services for 'Mobile-First' Customers: Q&amp;A with M&amp;T Bank - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/transforming-financial-services-for-mobile-first-customers-q-a-with-m-t-bank-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/transforming-financial-services-for-mobile-first-customers-q-a-with-m-t-bank-layer-7-api-management</guid>
         <pubDate>November 8, 2017</pubDate>
         <description>I caught up with CA World presenter, David Januchowski, about the evolution of M&amp;T Bank’s relationship with CA Technologies, the role of technology in financial services, and the impact of security and API management on the mobile-first experience. Our conversation: CO: Hi Dave! Thanks for taking the time to speak with me today. To start, can you give us a bit of background about yourself and your current role at M&amp;T Bank? DJ: Glad to speak with you. I’ve been an Enterprise Architect with M&amp;T Bank for two and a half years now. Before that, I worked as an Enterprise Architect, Solutions Architect, and Developer in the Healthcare and Financial Services industries for over 20 years. I’m currently leading projects at M&amp;T Bank related to our initiatives in commercial, credit, and digital banking and payments. In particular, I’ve been very focused on how technology can solve business problems for financial services. CO: Absolutely. Technology as a business driver is a big focus here at CA as well. Which leads to my next question. What challenges were you hoping to solve at M&amp;T Bank through working with CA? DJ: M&amp;T Bank needed to connect our native mobile app to enterprise services, and we needed a way to expose our services to external vendors and partners through APIs. CO: What was the process of selecting CA API Gateway in particular? DJ: M&amp;T has a long partnership with CA, specifically around CA’s security suite. For this new API project, we carefully evaluated leading API solutions in the marketplace and established criteria that were critical for M&amp;T, including: governance, API security, traffic orchestration and caching, and cloud and legacy integration, among others. After performing multiple onsite and offsite reviews, we determined that CA API Gateway was the solution for M&amp;T Bank. CO: What business</description>
      </item>
      <item>
         <title>The Critical Guide to Software Defined Networking</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-critical-guide-to-software-defined-networking</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-critical-guide-to-software-defined-networking</guid>
         <pubDate>July 9, 2018</pubDate>
         <description>A monitoring handbook for NetOps leaders and practitioners to ensure successful software-defined networking deployments. Recent analyst research reveals that 81% of enterprises have deployed or plan to deploy SDN and NFV technologies in the next 12 months, but most admit that their current monitoring tools do not support these modern network architectures. Software-defined networking (SDN) means networks will grow and shrink based on user demand, as virtual network devices come and go, move between hypervisors and chew up resources in a heartbeat. The days of &quot;set it and forget it&quot; network administration and monitoring are gone, and your currently deployed monitoring tools won't be able to handle these increasingly dynamic, virtualized and complex modern network architectures. Swivel-Chair Monitoring Won't Cut It Anymore Because of rapid advances in networking technology coupled with user demands, network managers often find themselves with a multiplicity of tools, each designed to manage or monitor a single aspect of enterprise network and application performance. The result is that over 50% of enterprises use 10 or more network monitoring/troubleshooting tools and spend 71% of their day fighting fires. The problems created by relying on swivel-chair monitoring are many. First, there is the duplication of effort involved in managing multiple tools and interfaces. Then there is the uncertainty brought on by having to rely on a variety of tools for various infrastructure components. Just keeping track of which tool is used for which network element can induce stress. You Can't Monitor What You Can't See (Full-Stack Convergence for New Visibility) Perhaps the biggest problem created by this fragmentation is a lack of end-to-end network visibility, cited as the No. 1 challenge to successful network operations in the same EMA report. Additionally, the migration to software-defined networking technologies will not happen overnight.
Very few</description>
      </item>
      <item>
         <title>15 Project Portfolio Features You Need (Part 1)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-1-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-1-clarity-ppm</guid>
         <pubDate>July 29, 2018</pubDate>
         <description>The first in a three-part series on how redesigned CA Project Portfolio Management (CA PPM) features help businesses solve their most pressing issues with ease. Back in 2015, the Project Management Institute's Pulse of the Profession report stated that &quot;on average, 64 percent of all projects are successful.&quot; Success rates, it noted, had been stuck at that level for years, and most businesses were using tools outside their project portfolio management (PPM) systems (email, spreadsheets, Microsoft PowerPoint®, etc.) to work through their processes. At CA, we executed our own research study to gain a greater understanding of how our future product development could lessen the need for external tools and at the same time raise project success rates. Today's CA Project Portfolio Management (CA PPM) tool reflects that, with dozens of improvements over the last few years. Five of them are detailed below. 1. A modern, social user experience Each day, workers become more accustomed to living in an app economy where communication is easy and technology simplifies their lives. But enterprise tools – including many PPM solutions – haven't kept up. They don't simplify everyday tasks, don't facilitate in-context communication and sometimes don't even provide a way to see what people are working on without navigating through multiple screens. As a result, we redesigned CA PPM to be faster, easier and more intuitive. Everyday tasks are simpler, collaboration is enhanced, visibility is comprehensive and organizations' most pressing issues can be resolved without having to export data. We've transformed CA PPM into an easy-to-use, single source of information for all types of projects, regardless of the manager, department or team. 2. Project blueprinting In our study, we found that teams were navigating through reams of information that had no relevance to them or the tasks they were trying to accomplish. The data IT</description>
      </item>
      <item>
         <title>Are You Ready To Jump on the SD-WAN Bandwagon?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/are-you-ready-to-jump-on-the-sd-wan-bandwagon</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/are-you-ready-to-jump-on-the-sd-wan-bandwagon</guid>
         <pubDate>March 28, 2018</pubDate>
         <description>Assuring SD-WAN and the pathway to the cloud means taking a unified and comprehensive approach to network monitoring. In today's fast-paced digital world that challenges traditional approaches with emerging technologies, innovative network strategies can provide organizations with that all-important competitive edge. The wide area network (WAN) has been the backbone of multi-site enterprises for decades, but it's entering a new era. As organizations move more workloads to the cloud, the WAN will need to graduate to a new level of intelligence to establish itself as a resilient pathway to a reliable cloud experience. Top of the charts SD-WANs are leading the software-defined networking charge, with 70 percent of businesses planning to adopt the technology in some form in the next 18 months, expecting cost savings of 20 percent. So why the change of direction? The main reason is improved application performance. Better network security and reduced operational costs are also cited as key business drivers. But cost savings, security and application performance benefits will only be realized if network operations evolve their network performance monitoring tools and practices to monitor this new, smarter technology. SD-WAN introduces software-defined intelligence to regulate the enterprise WAN for optimal application experiences. Yet, the enterprise needs to monitor and validate this intelligence along with their traditional network for full assurance. As SD-WAN intelligence and automation increase, the need for deep visibility into both the intelligence (control plane) and application traffic (data plane) increases, while the complexity of monitoring and correlating the underlying technologies should decrease. Software-defined architectures typically include multiple vendor technologies. The need to plan and model capacity across disparate SD-WAN providers and technologies is paramount to assuring the application experience.
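The control-plane/data-plane correlation described above can be sketched in a few lines. This is a minimal illustration only: the function name, data shapes, and thresholds are assumptions, not part of any vendor API.

```python
# Minimal sketch: flag SD-WAN control-plane events (e.g. path changes)
# that occur near data-plane latency spikes. Assumed data shapes:
# events = [(epoch_seconds, event_name)], samples = [(epoch_seconds, latency_ms)].
def events_near_latency_spikes(events, samples, threshold_ms=150, window_s=60):
    """Return the control-plane events within window_s of any latency spike."""
    spikes = [t for (t, ms) in samples if ms > threshold_ms]
    return [(t, name) for (t, name) in events
            if any(window_s >= abs(t - s) for s in spikes)]

samples = [(100, 40), (160, 210), (220, 35)]        # (epoch_s, latency_ms)
events = [(150, "path-change wan1 to wan2"), (400, "config-push")]
print(events_near_latency_spikes(events, samples))
# prints [(150, 'path-change wan1 to wan2')]
```

A real monitoring pipeline would pull both feeds from the SD-WAN controller and the performance tool, but the core idea is the same: join the two planes on time so an operator sees which intelligence decisions coincided with degraded application experience.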
Figure 1: SD-WAN NetOps Performance Dashboards from CA Technologies. A rock star monitoring approach. As more critical applications migrate</description>
      </item>
      <item>
         <title>Monitoring Cloud Databases Versus Traditional - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-cloud-databases-versus-traditional-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-cloud-databases-versus-traditional-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>April 22, 2018</pubDate>
         <description>Achieve end-to-end application response visibility and monitoring in Cisco ACI environments. Many companies today choose a Cisco networking infrastructure to service their physical and virtual networking needs for enterprise data center operations. These enterprises also plan to migrate to the latest software defined networking (SDN) technologies to help network operations (NetOps) deploy network services quickly to respond to competitive conditions and user demand. Cisco is incorporating various new technologies, like Cisco Application Centric Infrastructure (Cisco ACI) and SDN, into its networking equipment, but these new technologies can cause disruptions in your existing monitoring strategies. This includes mirroring technologies for packet and flow data, e.g. switched port analyzer (SPAN), remote SPAN (RSPAN), encapsulated remote SPAN (ERSPAN), and VLAN access-list (VACL), which can run into issues with encapsulation and other new networking technologies. All of this creates a need for comprehensive network visibility to overcome any limitations and maximize the use of Cisco equipment, while at the same time maintaining an advanced network monitoring strategy that enables measurement of application performance on the underlying network. Let's review a few challenges with packet capture in a Cisco ACI environment and then discuss the CA solution that overcomes these challenges and enables proactive network troubleshooting and triage. Challenges of Data Visibility with Cisco ACI The Cisco ACI architecture focuses on distributed applications. It uses a centralized controller and an overlay structure to create, deliver and automate application policies throughout the network. Access to data monitoring can be accomplished either by use of network TAPs or SPAN-related technology, depending upon the architecture implementation.
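Because the same flow can be mirrored at several TAP or SPAN points in a leaf-and-spine fabric, monitoring tools often receive multiple copies of each packet. As a rough illustration only (the function and data format are assumptions, not part of any Cisco or CA product), duplicate copies can be suppressed by hashing the raw packet bytes:

```python
# Rough sketch: drop duplicate copies of mirrored packets by hashing raw bytes.
# A bounded window keeps memory use constant on a continuous stream.
import hashlib

def dedup_packets(packets, window=10000):
    """Yield each distinct packet once, keyed by a SHA-256 of its bytes."""
    seen, order = set(), []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in seen:
            continue                    # duplicate copy from another mirror point
        seen.add(digest)
        order.append(digest)
        if len(order) > window:         # evict the oldest digest
            seen.discard(order.pop(0))
        yield pkt

# Three mirrored copies reduce to two distinct packets:
stream = [b"flow-a seq1", b"flow-a seq1", b"flow-b seq1"]
print(list(dedup_packets(stream)))
# prints [b'flow-a seq1', b'flow-b seq1']
```

In practice, production tools hash selected header fields (e.g. IP ID, sequence numbers) rather than whole frames, since encapsulation such as ERSPAN can alter the outer bytes of otherwise identical copies.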
However, issues like duplicate packets and the need for data filtering capabilities still exist and can create a significant burden for many network tools. For instance, redundant traffic streams and a distributed leaf and spine architecture means that one should</description>
      </item>
      <item>
         <title>15 Project Portfolio Management Features Designed to Help Your Key Business Initiatives Succeed: Part 3 - Clarity PPM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-3-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-3-clarity-ppm</guid>
         <pubDate>October 6, 2018</pubDate>
         <description>The third in a three-part series on how CA PPM helps businesses solve their most pressing issues with project portfolio management features focused on business intelligence. In our conversations with customers over the last several years, we've noted ever-increasing interest in business intelligence. Today's organizations want dependable, up-to-date data that can be used to make more impactful and more meaningful business decisions – no surprise in this highly disruptive, fast-paced environment. In response, CA Technologies has taken significant steps toward providing organizations with the intelligence they need to guide their businesses through everything from the most aggressive competitive threats to understanding the potential (and real cost) of each new opportunity. Here are five new features that shine the spotlight on project portfolio management (PPM) business intelligence: 11. Self-service business intelligence (BI) The right data warehouse can deliver real competitive advantages, and adding BI tools designed to fully exploit the data they collect is like adding a turbo engine to a Mustang: performance multiplied. The only problem is that legacy BI tools can be expensive and time-consuming. If your BI tool is outdated, you can easily spend hundreds of hours a year writing queries, scrubbing data and generating ad-hoc reports - i.e., losing valuable lap time. At the other end of the track, modern BI tools are interactive. Their simplified dashboards provide anytime, anywhere access to information. And their simplified processes reduce the need for data experts: improved lap time. CA Project &amp; Portfolio Management (CA PPM) was designed for the easy extraction of information. We've now combined the CA PPM data warehouse with innovative BI tools like JasperSoft, Power BI, Tableau, Qlik and other solutions to provide extended project, resource and financial information.
The result is self-service portfolio analytics, powerful data visualization capabilities, personalized dashboards and 360-degree views of the business and</description>
      </item>
      <item>
         <title>Creating Business Agility and New Customer Experiences with APIs and Microservices – A Q&amp;A with Ávoris</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/creating-business-agility-and-new-customer-experiences-with-apis-and-microservices-a-q-a-with-voris</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/creating-business-agility-and-new-customer-experiences-with-apis-and-microservices-a-q-a-with-voris</guid>
         <pubDate>November 9, 2017</pubDate>
         <description>How can I get my organization to quickly adapt to changing customer expectations? How can we deliver a better customer experience than the competition? What can I do to disrupt, rather than be disrupted? These are all top questions we hear Line of Business executives and CTOs asking when digital transformation initiatives are top of mind. IT capabilities are at the core of either enabling new business opportunities or being a barrier to getting things done. The state of legacy systems and application architectures, culture, and processes all determine an organization’s ability to make their data usable, launch new apps for every channel customers want to use and adapt to new technology patterns such as IoT. Speed is of the essence with competitors on your heels, and those businesses that have adopted agile application architectures are finding themselves ready to compete and win. For example, the travel industry is a highly competitive and fragmented arena with many options for consumers to digitally explore and book travel options. Being able to identify trends in customer behavior and deliver easy-to-use, always available, and personalized digital experiences is critical to keep customers coming back. With CA World 2017 just around the corner, I had the opportunity to interview one of our speakers, Joan Barceló (@joan_barcelo), IT Architecture and Development Manager at Ávoris, about their mission to enable their business to deliver exceptional experiences and quickly adapt to change. Ávoris, Reinventing Travel, is a leader in the travel industry that’s been around for over 85 years. They have over 700 travel agencies, 3,000 employees and even an airline that make vacation dreams come true for over 2.2 million travelers. Ávoris is the travel company of Barceló Group, with more than 50,000 rooms in 21 countries around the world. Let’s see what Joan has to</description>
      </item>
      <item>
         <title>CA's Network Monitoring OpenAPI App of the Month</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-s-network-monitoring-openapi-app-of-the-month</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-s-network-monitoring-openapi-app-of-the-month</guid>
         <pubDate>March 26, 2018</pubDate>
         <description>Each month I will call out a particular network operations OpenAPI app which is available on the CA Performance Management GitHub site. The goal is to provide visibility into existing network monitoring applications available for use as well as to provide inspiration to those interested in creating or customizing their own OpenAPI Apps for improved network visibility. This week I'm going to take a step back in time to look at a problem which is universal to SNMP-based network monitoring and then show you how a little modern technology can go a long way to solving some NetOps challenges. It also helps to look at an example where CA eHealth did this very well. In my travels and conversations with customers, I hear a consistent message that &quot;it's too hard to find out why items are not collecting data.&quot; Basically, figure out if there are polling errors related to what I'm trying to look at. And if there are, get some basic insights into what happened and what types of errors are being generated (timeouts or other SNMP-related errors). I knew immediately that this was something that the OpenAPI App platform could handle quite well. Basic queries with some simple visualizations and a little something special I'll get into in a minute. CA eHealth customers had some solid best practices built up over the years which leveraged very simple – but effective – logging and visualization features coupled with a little elbow grease. Figure 1: CA eHealth leveraged simple but effective logging and visualization features for network monitoring. Basically, you could view how many 'good' and 'bad' polls have occurred recently, with red being 'bad' and green being 'good', in a basic bar chart. There was also a simple text log file that would record insights into what types</description>
      </item>
      <item>
         <title>API Analytics for API Program Success - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/api-analytics-for-api-program-success-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/api-analytics-for-api-program-success-layer-7-api-management</guid>
         <pubDate>August 2, 2018</pubDate>
         <description>As your business creates more APIs to enable developers and partners, integrate systems and deliver better customer experiences, API teams need detailed API analytics to provide feedback into their design and planning process, and business leaders need to understand how their digital initiatives are performing. Knowing which developers and apps are using your APIs, where, and how much, helps you understand overall performance. API analytics are critical for troubleshooting issues, planning capacity, ensuring adherence to Service Level Agreements and knowing that you're providing a positive experience for API consumers. A single place for architects, developers and business users to get the full picture of API performance across an organization is critical for the success of your API programs. We'll take a look at how organizations use CA API Management, as an example, to dive into the importance of real-time analytics, customizable reports, creating sharable dashboards, and using an analytics API to integrate with other tools. API Analytics Dashboards A central location where teams can get API analytics creates consistency in reporting and saves organizations a great deal of time and effort in trying to build their own holistic API program view or custom reporting across different groups. When API teams log in to CA API Management they see the default dashboard that provides a total picture of API program health at a glance. This dashboard provides a quick snapshot of important metrics displayed in real-time for all your APIs and associated apps, and it can be customized to suit your organization's KPIs. Visual indicators of great or lacklustre performance can be drilled down into to see reports that give deeper insights on specific APIs or apps. As an API can be used by multiple apps, and vice versa, the reports allow a user to specify which API or app they want to see usage</description>
      </item>
      <item>
         <title>AutoSys Workload Automation Gets Its Mobile App</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/autosys-workload-automation-gets-its-mobile-app</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/autosys-workload-automation-gets-its-mobile-app</guid>
         <pubDate>May 20, 2019</pubDate>
         <description>Most of us have a smartphone or other mobile devices where we use apps – to play games, get step-by-step directions, access news, books, weather, and more. But now there is even more use of apps in the AutoSys world. Powerless to intervene while out of the office If you're either an admin or user of AutoSys Workload Automation, one or both of these scenarios will probably strike a responsive chord: Scenario #1 - an AutoSys Workload Automation Admin receives an urgent request to Force Start or Kill a job. The regular on-call process fails. An escalation person is available, but is away from home or has no access to corporate systems. Scenario #2 - a subset of jobs stops executing, and STARTJOBFAIL alarms are issued, but the AutoSys Admin is out of office and has no backup. A game changer for AutoSys Workload Automation users Up until now you'd have been powerless to intervene while out of the office. But from now on things can be totally different, and this is a game changer for AutoSys Workload Automation users: In Scenario #1 - the escalation person is able to access AutoSys and take the requested action from their smartphone, and actions taken are logged in both EEM and 'autotrack'. In Scenario #2 - the AutoSys Admin is nonetheless able to check the status of affected machine(s) via their mobile device and assist with troubleshooting. Using AutoSight, a mobile app developed by Extra Technology, a CA Technologies Workload Automation Partner, you can view and manage your AutoSys Workload Automation jobs and workflows on your mobile device wherever you are, responding to problems and alerts while out of the office and without a laptop. There is quite a bit of energy spreading throughout the AutoSys Community right now with the availability of AutoSight</description>
      </item>
      <item>
         <title>API Strategy and Design - Your First Stop in Full Lifecycle API Management - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/api-strategy-and-design-your-first-stop-in-full-lifecycle-api-management-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/api-strategy-and-design-your-first-stop-in-full-lifecycle-api-management-layer-7-api-management</guid>
         <pubDate>September 24, 2017</pubDate>
         <description>While enterprises often talk about API Strategy and API Design as one, they are not – and like the cart and the horse, one comes before the other. We often hear API Strategy and API Design discussed as a single topic - and while they are both an important component of the &quot;Plan&quot; phase in Full Lifecycle API Management, they are nonetheless two components that need to be considered separately. A means to an end? In the Application Economy, an API Strategy is critical to digital success. Whether it's to provide a superior digital experience, grow markets and revenue streams, connect employees and partners, or launch an innovative new service, successfully executing a business strategy requires the ability to launch new apps and (if applicable) to coordinate your digital presence with partners. The most efficient way to do this is through APIs – but APIs are the means to the end, not the end itself – for more on this check out our webinar: Mastering Digital Channels Through APIs. So, before building or designing APIs you need to implement an API strategy that should address four key requirements: 1. Alignment and Usefulness: You should know your business goals, and how APIs help achieve those goals You should ensure that the API will have a future value Look for gaps in your industry to exploit through APIs (alternatively, look to see if someone is disrupting your industry through APIs) 2. Engagement and Usability: APIs should be easy for your developers to access and use Examine your target developer's tool needs, and ensure you can integrate with them 3. Scalability and evolvability: APIs should adapt to the needs of the business, with a solid enterprise architecture around them A versioning methodology needs to be in place 4. Manageability and sustainability: It should be easy to see and control an API's</description>
      </item>
      <item>
         <title>OpenAPIs for Software Defined Networking Harmony</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/openapis-for-software-defined-networking-harmony</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/openapis-for-software-defined-networking-harmony</guid>
         <pubDate>May 19, 2018</pubDate>
         <description>I love music – both listening to it and playing it. A great track or performance rarely involves just one element; it needs input from various sources. And all those sources need to come together at the same time in perfect harmony. A software defined networking (SDN) architecture is equally reliant on multiple different elements. If just one of those elements is out of tune, then the whole stack can come crashing down – along with the customer experience it supports. When it comes to the customer experience, there is zero tolerance for discord. Everything has to work every time, all the time. To ensure software defined networking technologies perform in unison with traditional infrastructure, organizations need to be able to integrate not only multi-platform and multi-vendor elements but also the associated network monitoring data and dashboards. Singing the same software defined networking tune In a perfect world, every networking component from every networking vendor would interoperate. But that perfect world has yet to materialize: vendor hardware has its own proprietary language; protocols are not interchangeable; and analytics don’t use the same baselines. This doesn’t just impact network monitoring and performance; it impacts application availability and the customer experience. Open application programming interfaces (APIs) can help bridge these gaps and ensure that both SDN and the traditional networks it supports never miss a beat. There are two key API groups that matter in the SDN world – and they can be either open or proprietary. Southbound APIs: used by the SDN controller to push configuration and state information to network devices, such as switches and routers. These APIs facilitate efficient control over the network and enable changes to be executed in real-time based on business and user demands. Northbound APIs: used to communicate between the SDN controller and the services</description>
      </item>
      <item>
         <title>Self-Driving Cars? Where's My Self-Driving App?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/self-driving-cars-where-s-my-self-driving-app</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/self-driving-cars-where-s-my-self-driving-app</guid>
         <pubDate>June 8, 2018</pubDate>
         <description>Those lucky enough to own a Tesla marvel at how the Autopilot feature enables hands-free driving, but more interesting (to me at least) is how the system behind the scenes works. Each Tesla is a sophisticated data collector that pushes sensor info to a massive shared database. Paired with machine learning algorithms, this enables what Tesla calls fleet learning. Initially, the vehicle fleet is a passive recorder – noting the position of road signs, bridges and other stationary objects. Real-world driver actions are also recorded and compared to what Autopilot would have hypothetically done in that same scenario. Their machine learning algorithms create what is essentially a geocoded white-list of radar-recognized objects. This list is designed to prevent false alarms – like auto-braking for a road sign that might initially appear to be on a collision course but just happens to be posted on a rise in the road. When enough cars (sensors) observe and report the same safe driver action, the object is white-listed. False braking events are eliminated as fleet learning intelligently learns what are true “alerts.” In the world of ITOps we are all too familiar with a deluge of data that can generate false alarms. There's even a name for it: &quot;alert fatigue.&quot; Containerized apps and microservices have an ephemeral nature that can create an exponential increase in the number of events to process as compared with traditional architectures. The sheer volume and velocity of data now surpasses a human's cognitive threshold. Using the Tesla analogy, why can't these systems become self-learning such that apps auto-remediate and fix problems without human intervention? The short answer is that they can, and the concept of a self-driving app is being made real today through a combination of machine learning and artificial intelligence commonly referred to as AIOps or Artificial</description>
      </item>
      <item>
         <title>PODCAST: Today's Network Monitoring Challenges, A Discussion with Robert Kettles, Broadcom Sr NetOps Consultant - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-today-s-network-monitoring-challenges-a-discussion-with-robert-kettles-broadcom-sr-netops-consultant-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-today-s-network-monitoring-challenges-a-discussion-with-robert-kettles-broadcom-sr-netops-consultant-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>June 5, 2019</pubDate>
         <description>
Robert Kettles started off as a field engineer at Cabletron Systems, supporting LAN/WAN switching and routing solutions along with their then-new network management platform: Spectrum. Over two decades later, he continues to help customers solve network fault and performance management challenges, particularly in the telecommunications and financial services sectors. His recent work involves advanced event correlation procedures, integration of Syslog data, and enhancing tool adoption and collaboration with various customers.

Robert received a Bachelor of Science degree in Computer Engineering and a Master of Science degree in Information Systems Engineering from Polytechnic University (now the NYU Tandon School of Engineering). He is an active contributor to the CA Communities forums.

LinkedIn: https://www.linkedin.com/in/robert-kettles-375459
</description>
      </item>
      <item>
         <title>Launching DX AIOps: It’s All About the Digital Experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/launching-dx-aiops-it-s-all-about-the-digital-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/launching-dx-aiops-it-s-all-about-the-digital-experience</guid>
         <pubDate>June 2, 2019</pubDate>
         <description>Delivering innovative, reliable, and responsive digital experiences is a vital competitive imperative for today's businesses. However, while the need to optimize the digital experience continues to get more critical, it also continues to get more difficult. Read on to learn more about the launch of DX AIOps, our new artificial intelligence for IT operations (AIOps) platform, which enables teams to optimize digital interactions in today's modern environments. DX AIOps Unveiled: Digital Experience the Focus CA Technologies, a Broadcom company, has an extensive track record of delivering solutions that help our customers provide winning digital experiences, and we continue to focus our innovation in this area. Our launch of the DX AIOps Platform is just the most recent example of how our digital experience focus is yielding advanced solutions. DX AIOps is a leading platform that enables organizations to optimize digital experiences, while contending with environments that are increasingly dynamic, hybrid, and distributed in nature. DX AIOps represents a single, unified platform that features AIOps and intelligent automation capabilities. By providing these advanced, extensive capabilities in a unified platform, we enable our customers to more quickly, efficiently, and fully capitalize on the promise of AIOps. DX AIOps delivers all the capabilities IT teams need, addressing the &quot;four A's&quot; of AIOps: Acquire. DX AIOps integrates digital experience, application performance, infrastructure and network monitoring services to deliver new levels of visibility across your entire digital delivery chain. DX AIOps monitors your entire stack with a single solution, minimizing the war rooms and finger-pointing associated with tool sprawl. Only DX AIOps cross-correlates every component of your full stack, from the application to the underlying infrastructure to the experience of your users, allowing you to see how everything in your stack is connected.
Aggregate. DX AIOps equips customers with the broadest monitoring coverage of modern</description>
      </item>
      <item>
         <title>How can your Center of Excellence help to reduce alarm fatigue?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/how-can-your-center-of-excellence-help-to-reduce-alarm-fatigue-ca-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/how-can-your-center-of-excellence-help-to-reduce-alarm-fatigue-ca-automation</guid>
         <pubDate>July 29, 2019</pubDate>
         <description>When I talk with people about Digital Transformation, the thinking is always about how they can digitize business processes, increase the speed of delivery and reduce costs. This is a very simplistic view and does not provide enough value to the business – simply scripting will not increase your quality, and in the cloud world it is a tough challenge. Your services are running everywhere, from your datacenter to multiple cloud suppliers; each has its own set of management and monitoring tools, which makes control and visibility of your business processes more complex. At the same time, the expectation of IT has grown dramatically; it’s no longer about downtime and availability. It’s about agility, quality, and speed. Slow is the new downtime. For example, 53% of visits are abandoned if a mobile site takes longer than three seconds to load. Downtime is very expensive. According to Gartner, the average cost of IT downtime is $5,600 per minute, and each outage has significant potential for reputational damage and lost revenue. So simply trying to go faster is not best for the business. Insight and Growth of Complexity With the distribution of processing across hybrid environments, everything has become far more complex. An ever-increasing number of monitoring tools that are disconnected from the enterprise processes has significantly increased the number of alarms we have to react to. That has created more pressure for Enterprise IT to deliver the services the business and our customers expect. 72% of IT organizations rely on up to nine different IT monitoring tools to support modern applications. Keep in mind: this is the situation before they started their digital transformation initiatives. According to the same survey, 47% experience on average more than 50,000 alerts per month. Whenever an alert activates, it requires identification and verification to initiate</description>
      </item>
      <item>
         <title>AI in IT operations Solid or splashy - Put AI to Work to Your</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ai-in-it-operations-solid-or-splashy-put-ai-to-work-to-your</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ai-in-it-operations-solid-or-splashy-put-ai-to-work-to-your</guid>
         <pubDate>June 7, 2018</pubDate>
         <description>The AI craze has taken the world by storm. Businesses are looking to adopt AI-based applications and technologies to stay competitive and provide a better experience to their customers. Data scientists are the newest unicorns, claiming the fastest growth rate in the jobs market. While it is true that there is a lot of “AI-washing” happening, there is solid, tangible value of AI in certain aspects of business, such as within IT operations, also known as AIOps. Why is AIOps needed in the first place? To better understand the value of AIOps, we need to take a step back and look at the problems it’s trying to solve. As businesses become more digital and focused on customer experience, the sheer volume, velocity and variety of data grows. Second, as IT adopts dynamic architectures and technologies alongside operating traditional environments, complexity rises. More data, higher complexity and the stakes of providing a superior customer experience mean IT teams will struggle to keep up. Existing reactive tools will not cut it anymore. Finding and resolving issues is like finding a needle in a haystack with existing tools. Experience is everything in the digital economy; IT teams can simply no longer afford to take hours to triage issues in firefighting mode. AI-based analytics can augment the existing expertise of IT staff and help them find, resolve and optimize business applications and infrastructure a lot faster. Why AIOps now? You may say AI and related theories have been here for years, so why is it ready to adopt now? Well, we are in an exciting time. The technologies that enable or are prerequisites for AI are widely available and affordable now, especially Big Data technologies such as Apache Hadoop and Apache Spark, which are key enablers for AI and</description>
      </item>
      <item>
         <title>Announcing Simplified DevOps and Extensible Data Source Framework for CA Live API Creator - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/announcing-simplified-devops-and-extensible-data-source-framework-for-ca-live-api-creator-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/announcing-simplified-devops-and-extensible-data-source-framework-for-ca-live-api-creator-layer-7-api-management</guid>
         <pubDate>August 29, 2018</pubDate>
         <description>We have successfully released version 5.0 of CA Live API Creator, and as an engineer on the development team, I am very excited to talk about a couple of new features we are introducing in this version. For 5.0, our focus was primarily on enhancing the DevOps aspects of the product and expanding its data source support through a new Extensible Data Source Framework. Simplified DevOps. In version 5.0, with DevOps in mind, we decided to move away from using a central database to store the product metadata. Instead, we store this information in the file system, in formats that are easily readable and understandable (JSON and JS). We call this directory the Admin Repository, and this database-less approach enabled us to simplify the DevOps processes detailed below. Lifecycle operations: the entire process can be scripted using a scripting language of your choice and/or the CA Live API Creator Admin command-line interface (Admin CLI). API-to-API server deployment from source control system (SCS) artifacts: for example, you can save development artifacts and export admin contents into a file for maintenance in an SCS (such as the export artifact). Creation of APIs on an API server in a production system: bootstrap API deployments by pointing to an Admin Repository to load the APIs and team spaces from. The following illustration depicts a typical DevOps workflow using CA Live API Creator 5.0. Enhanced team development. The shift from a central database to an Admin Repository also enabled us to rethink our approach to simplifying team development. With 5.0, as API developers change their APIs and team spaces, these changes are synchronized with the Admin Repository. Adding this</description>
      </item>
      <item>
         <title>Adopt Technologies Faster with REST APIs in CA UIM 9.0.2</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/adopt-technologies-faster-with-rest-apis-in-ca-uim-9-0-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/adopt-technologies-faster-with-rest-apis-in-ca-uim-9-0-2</guid>
         <pubDate>September 12, 2018</pubDate>
         <description>Infrastructure monitoring tools, like CA Unified Infrastructure Management, provide out-of-the-box support for over 200 technologies through monitoring probes. These probes give IT administrators the ability to monitor everything from traditional systems to modern cloud services across multiple platform and OS combinations. Traditionally, each technology that needed to be monitored required a separate probe, since the interface through which monitoring metrics (i.e., QoS data points) could be retrieved from the device was either proprietary or technology specific. Monitoring newer technologies became a project spanning multiple agile sprints and required valuable man hours. The average turnaround time to add any new technology was 4-6 weeks. It was a problem that needed to be solved. Leveraging RESTful APIs in CA UIM. As the industry matured and client-server communication became API driven, infrastructure vendors also started exposing APIs for monitoring data and alarms. One of the most popular types of APIs is REST or, as they're sometimes known, RESTful APIs. REST APIs were designed to take advantage of existing protocols. While REST - or Representational State Transfer - can be used over nearly any protocol, it generally takes advantage of HTTP when used over the web. One of the key advantages of REST APIs is that they provide a great deal of flexibility. Data is not tied to resources or methods, so REST can handle multiple types of calls, return different data formats and even change structurally with the correct implementation of hypermedia. To reduce the release time for supporting any new technology, we experimented internally with monitoring probes leveraging RESTful APIs. The first probe released based on this approach was a monitoring solution for a popular storage technology, XtremIO, which took 2 weeks from inception to release. Looking at the efficiency of this REST based</description>
      </item>
      <item>
         <title>Watermelon Status: Green on the Outside, Red on the Inside</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/watermelon-status-green-on-the-outside-red-on-the-inside-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/watermelon-status-green-on-the-outside-red-on-the-inside-rally-software</guid>
         <pubDate>October 3, 2018</pubDate>
         <description>I attended the Agile 2018 conference back in August, where I learned the term “Watermelon Status” for the first time. Watermelon Status… sounds delicious, right? What could possibly be wrong with anything resembling this rotund summer fruit? Can you imagine being the first person to discover a watermelon? Picture this: you stumble upon a lush patch of green-striped fruit, intertwined with green vines and leaves. Curiosity strikes, and to your surprise, a quick swing of your machete reveals vibrant, red flesh that is both edible and delicious! Similar to the “don’t judge a book by its cover” analogy, it is clear that not everything is what it appears to be at an outsider’s first glance. Now picture this: it’s a Tuesday morning in the office and your scrum team gathers for daily standup. You go around the horn sharing “This is what I did yesterday…, this is what I will do today…, this is what is blocking my progress…” How many times have we stated that we have “no blocks… no blocks…. no blocks…” even when we did, but just didn’t want to reveal it publicly to the team? Or because we didn’t properly prepare our daily update to succinctly identify the things that were hindering our progress? Oftentimes, when we are consumed in a cycle of routine, we develop decision fatigue, and we tend to follow the path of least resistance. It’s all green, green, green… until it’s red. When we practice watermelon status, we delude ourselves and others into thinking that the statuses of our work are “green” or “on track” when in reality they are not. Practicing watermelon status can end poorly at sprint reviews, when the promised deliverables are incomplete or unsatisfactory and teams must now face the reality of issues that</description>
      </item>
      <item>
         <title>Cloud Provider Vs Enterprise Wide Infrastructure Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/cloud-provider-vs-enterprise-wide-infrastructure-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/cloud-provider-vs-enterprise-wide-infrastructure-monitoring-tools</guid>
         <pubDate>March 27, 2018</pubDate>
         <description>A couple of groups in your organization have started to adopt public cloud for their applications. Following their success, your VP wants you to evaluate public cloud as a potential platform for some of your traditional applications and workloads. You investigate and conclude that some are suited for the cloud and some should remain on premises. Now you want to finalize your strategy. You are not sure whether to use the free monitoring tool offered by your cloud provider or continue investment in your enterprise monitoring tool (such as CA Unified Infrastructure Management - CA UIM). Does this sound familiar? Pretty much every enterprise IT executive goes through this dilemma. Provided your existing tool supports it, here are three key reasons that you should opt for an enterprise-wide monitoring tool: 1. Deeper cloud visibility. Even though cloud provider tools provide rich insights, they are still limited in certain cases. For example, Amazon CloudWatch (by AWS) cannot be used to monitor EC2 (cloud server) memory out of the box. In addition, if you want deeper visibility into the services running on a cloud system - say, Apache running on EC2 - you would need to use an enterprise-grade monitoring tool. CA UIM, for example, provides a rich set of monitoring probes for some of the most popular technologies so you can get deep insights. Sample Memory Metric Chart. 2. Visibility across the hybrid infrastructure stack. The majority of enterprise organizations are rapidly adopting public cloud services, but they are still going to have traditional on-premises infrastructure for at least the next five or more years. If you use a cloud provider tool, you will have to manage yet another tool. Multiple cloud providers will mean even more monitoring tools. Tracking down issues across hybrid applications (e.g. database in cloud but</description>
      </item>
      <item>
         <title>Deal of the Day: Is Your Mainframe Actually the Cheaper Option?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/deal-of-the-day-is-your-mainframe-actually-the-cheaper-option</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/deal-of-the-day-is-your-mainframe-actually-the-cheaper-option</guid>
         <pubDate>September 21, 2018</pubDate>
         <description>The primary driver of change is almost always cost reduction - and that includes your decision about moving off the mainframe. Most companies' platform transition assessments narrowly focus on software and MSU/MIPS costs. However, those who have transitioned off the mainframe often find that their OPEX costs have more than doubled while their workloads have remained stable - and all this after putting years of effort into transitioning platforms. Let's consider the real cost of ownership: 96% of customers paid more to &quot;re-host&quot; data in distributed environments. For 38% of customers, costs on the distributed platform doubled in comparison to System z. The mainframe is clearly the &quot;deal of the day&quot;. And there are ways to minimize the total cost of ownership (&quot;TCO&quot;) of your mainframe environment even further. Enterprises like the State of Oregon Enterprise Technology Services (&quot;ETS&quot;) continue to grow their mainframe environments - and are quickly realizing the benefits of that choice. ETS was able to deliver 15 percent more services at 20 percent lower cost on the mainframe. Scale with Your Business Needs. The mainframe continues to provide the best transactional performance - a single server accomplishes more than all the Facebook servers combined. According to Marc Staimer of Dragon Consulting: &quot;CICS handles more than 1.1 million transactions per second worldwide. That's more than 95 billion transactions per day. To put that in perspective, Google searches average approximately 60,000 per second. Facebook likes average approximately 30,000 per second. Consider that a single IBM z System mainframe CICS can handle roughly as many transactions - up to 2.5 billion/day - as all of the Facebook servers combined.&quot; Consider the Costs. Distributed environments are priced on a per-license basis. So, what does that mean for you? It may cost more to purchase applications and system management tools to run on multiple servers. Be cognizant of the terms and conditions.</description>
      </item>
      <item>
         <title>4 Steps to Keep in Mind When Executing your IT Monitoring Strategy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/4-steps-to-keep-in-mind-when-executing-your-it-monitoring-strategy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/4-steps-to-keep-in-mind-when-executing-your-it-monitoring-strategy</guid>
         <pubDate>February 25, 2018</pubDate>
         <description>Execution is the critical act of actualizing your IT monitoring strategy effectively, though it doesn't begin when most think it does. Execution begins before you ever step onto the playing field, and if you don't prepare, you will miss opportunities. Using solid processes and scaling them as experience and situations present chances to learn is the only way to create a true practice in any scenario. Leveraging lessons learned from both failure and success to drive positive revisions of future iterations is the quintessence of forging stronger principles within the practice. Executing quickly, decisively, and with confidence should be the stated goal every time you step up, but let's dig in and understand how listening with the heart of a teacher allows for true improvements to even the best stratagem. 1. Test. Execution begins in testing, in my opinion. The demarcation point between strategizing and executing is somewhat gray, but it's clear that without practice, the chances of success are naturally lower. Showing up with genuine confidence when working with different technologies and teams takes experience, and in the absence of actual firsthand knowledge - time spent working within the specific stacks - it's hard to grasp all the nuances under which today's complex systems function. I recently found myself somewhat flustered when the admins I was working with suggested I should use a different schematic as the background for a dashboard supporting DB2, since I was displaying a PureScale backend and not the HADR setup they were hoping for. Normally, this type of revelation disrupts the message, but through reassurance and quick thinking, CA Unified Infrastructure Management (CA UIM) rapidly allowed my reaction to coincide with the shifting requirements. 2. Listen. Working with customers and admins provides a wealth of specific knowledge around their own implementations,</description>
      </item>
      <item>
         <title>Master the Monitoring of Your Container Environment - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/master-the-monitoring-of-your-container-environment-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/master-the-monitoring-of-your-container-environment-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>December 26, 2017</pubDate>
         <description>Cloud databases such as Amazon Relational Database Service (Amazon RDS) bring the power of the cloud to database functions, and present a unique opportunity to manage the performance of modern cloud-first and cloud-migrated applications. Without a doubt, many DBAs are already familiar with the traditional process of managing DB performance and troubleshooting in an on-premise environment. Many of these monitoring procedures still apply to the cloud. While access to the back-end infrastructure supporting these databases has traditionally been key to calculating performance, that is no longer an option in the cloud, and administrators must rely on information collected and published by their service provider. AWS CloudWatch, the AWS monitoring API, provides just that insight, without an agent-based footprint. While it's valuable to see real-time information for troubleshooting and performance analysis, there's a missing piece of the puzzle when it comes to identifying performance degradation over an extended period. More importantly, it's critical to understand normal behavior for high-impact metrics, such as IOPS, for a given instance. Even when using the AWS monitoring service, it's critical to track historical performance and trend projection to accurately understand current application performance and service levels. Without the hypervisor perspective, visibility into cloud services and instances becomes increasingly difficult to interpret in relation to a moving baseline. The typical performance measurement for metrics such as IOPS may depend on the hardware specifications available; therefore, having an accurate baseline of hourly, daily, and weekly performance becomes key to identifying when and where performance bottlenecks are happening. While cloud database services present their own unique challenges for remote management, these concepts apply across this new application stack. Accurately pinpointing service delivery through RDS shares the same challenge and cloud backend visibility gaps with many</description>
      </item>
      <item>
         <title>An Interview with SD-WAN Innovator, Viptela</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/an-interview-with-sd-wan-innovator-viptela</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/an-interview-with-sd-wan-innovator-viptela</guid>
         <pubDate>March 25, 2018</pubDate>
         <description>I recently sat down with Paul Kohler, Director, Technical Partnerships and Alliances at Viptela, to get a better understanding of why SD-WAN is the hottest software-defined networking (SDN) technology in the market today and what it means for CA customers and their network transformation success. Paul, can you tell us a little bit about Viptela and the innovative SD-WAN technologies you have built? Viptela was founded five years ago on the desire to connect users to applications simply, reliably and securely. Our Software-Defined Wide Area Network (SD-WAN) technology has allowed global companies to build carrier-agnostic, policy-controlled and cost-effective WANs. Viptela cuts existing WAN operating costs by more than 50% while increasing bandwidth 10x and significantly improving security and uptime. As a result, we've been able to disrupt and transform the enterprise WAN, which had been stagnant for many years. Cisco announced its intention to acquire Viptela on May 1, 2017. Why jump into SD-WAN for Viptela? What did you see happening in the market? There are two waves transforming the enterprise networking landscape: migration from MPLS to Internet transport, and applications, workloads and storage moving from the enterprise data center to the cloud. As a result, enterprise customer network architectures need to be reconfigured to support demand for these resources that have now moved off-site. Other existing solutions in the marketplace either require far more operational expense and time to manage, or lack the functionality that enterprises require. These factors created the conditions that gave Viptela the opportunity to deliver a new solution to the market that addressed these needs. What are some challenges your customers have experienced in their SD-WAN deployments? And what are some lessons learned? A couple of items come to mind: Don't rely on PowerPoint presentations</description>
      </item>
      <item>
         <title>Future Proofing Business with Microservices and Docker Monitoring - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/future-proofing-business-with-microservices-and-docker-monitoring-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/future-proofing-business-with-microservices-and-docker-monitoring-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>August 12, 2017</pubDate>
         <description>Four Considerations When Selecting a Docker Monitoring Solution. Imagine you're a cog in a well-oiled team that's architected a wonderful new business app comprising 50-plus microservices running across Docker containers (it's a small deployment). Pretty standard stuff, right? And everything's Docker-centric across dev, test, and pre-prod environments. The team is so DevOps awesome that they're even thinking about running Docker in production - gasp! Perhaps it's a cloud delivery model, so you're using a number of elastic compute instances for each one of those container-housed microservice nuggets. Now let's get down to monitoring. No problemo, I hear you say. Resilience is the mantra, so the team probably automates a swag load of health checks at short intervals each and every day. OK, it's resulting in an increase in events and alarms, but we can take it - we always have. Of course, monitoring doesn't end at the ops console. Because this is a business-critical app, there are some pretty important functional considerations too. Being services-driven, the team is laser-focused on capturing &quot;golden signals&quot; all Google SRE style (latency, traffic, errors, saturation), or perhaps they go further, injecting app performance checks into Jenkins CI builds and analyzing cloud instance-level performance metrics. But there is a pattern emerging, and it isn't one of those funky analytics-based insights you keep reading about on the Interwebs. Nope: because of the dynamic (even ephemeral) nature of container architectures, microservice dependencies and API-centric communication, that sharp increase in alarms is becoming unmanageable. So much so that the notion of running Docker in production is not looking so good after all. This is tragic for business. Why? Because Dockerized applications and services are unequivocally the architectural fabric needed to engage customers at scale. Not next month or next year, now. Defer adoption due to concerns</description>
      </item>
      <item>
         <title>How Rally does Planning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/how-rally-does-planning-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/how-rally-does-planning-rally-software</guid>
         <pubDate>March 12, 2019</pubDate>
         <description>Rally is a company that was founded in 2001 by early agilists to create an agile product. We were the first to focus solely on agility: a product built by people practicing agility, for agile teams (and teams of teams), on every project type. Our product has always been SaaS, which means we did CI/CD before it was something everyone did. We have always continuously delivered. We scaled before others thought it was a thing. And we were an agile business from day 1, before the term &quot;Business Agility&quot; ever came to be. So, what does this have to do with planning? Our story. We've spent years learning and evolving. The best part of being a truly agile organization is that it is part of our DNA to constantly search for new and better ways to work. We started planning quarterly in about 2005. This worked well for us in those days. The markets moved more slowly, and disruption wasn't yet a word that rolled off everyone's tongues. We implemented Quarterly Steering to oversee the business side of planning and take on corporate issues. This allowed us to align as a business across the entire value stream and to include all departments. How we plan today. Fast forward to today, and the evolution is still ongoing. Today we plan monthly for product, and quarterly for the business. You see, we realized we had created separate meetings to align one business. We went back and reviewed our notes from previous years, and we realized we weren't totally synced. One meeting focused on product, one on business, one on corporate issues. Planning quarterly at the product level also gave us the ability to hide some organizational issues. If there is one thing we've learned over the years</description>
      </item>
      <item>
         <title>Network Monitoring Tools to Rule the Digital World</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/network-monitoring-tools-to-rule-the-digital-world</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/network-monitoring-tools-to-rule-the-digital-world</guid>
         <pubDate>September 17, 2018</pubDate>
         <description>Today's networks, assured by modern network monitoring tools, are more important than ever before in protecting your brand and the digital experience. Digital, social and mobile technologies have become so ubiquitous in our lives that we take their availability and quality for granted. We expect portals, apps and cloud services to respond immediately. We are completely spoiled, and it's not just our expectations that are increasing; demand is too. For example, global networks now deliver four million Google searches every minute and over 200 billion emails and 40 million tweets every day. A network assured by advanced network monitoring tools is more important than ever before in delivering your brand and the digital experience. Operational overload. Organizations are finding it harder to meet user demand and expectations as modern network infrastructures grow in volume and variety. Root causes are tougher to find. Architecture changes are problematic to track. User experiences are difficult to correlate. Modern networking architectures like SD-WAN and Cisco Application Centric Infrastructure (ACI) are a driving force in today's data centers. With the recent Cisco SD-WAN announcement, it is more imperative today than ever before that network teams adopt a comprehensive and unified approach to monitoring traditional and modern network architectures. Why modern network monitoring tools for SD-WAN? The expansion of cloud-based applications and infrastructures, coupled with non-guaranteed connectivity, requires visibility and assurance across the WAN to ensure end-user experience of critical applications. Software-defined overlays add complexity, which requires understanding and correlation of the control and data plane infrastructure. SD-WAN introduces intelligence, which needs to be validated and refined as the maturity of the technology and your deployment increases. The rollout of SD-WAN requires the ability to plan and model capacity needs across disparate providers and technologies. CA's Network Operations Analytics solution is a unified, full-stack network</description>
      </item>
      <item>
         <title>The Agilist’s Guide to Summer 2019</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/the-agilist-s-guide-to-summer-2019-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/the-agilist-s-guide-to-summer-2019-rally-software</guid>
         <pubDate>June 11, 2019</pubDate>
         <description>Summer's in full swing, which can only mean one thing: conference season. The team here at Rally Software has been busy including speaking at Agile and Beyond and XP 2019. More importantly, we're just getting started. Here's the latest news leading up to this summer's agile conferences. Rally Software is back. If you haven't heard already, the rumors are true - Rally Software is back. While the name Agile Central won't switch overnight, you may start to notice some changes, especially in how we show up at conferences and on the web. Rest assured, you'll see changes in the product and technical documentation as we continue to roll out this exciting name change. In the meantime, we apologize for any inconvenience or confusion regarding our name change. Rally Software is back Moving Mountains at Mile High Agile. We kicked off summer conference season in our own backyard in Denver, Colorado, USA (Rally was founded in Colorado). Every year, Agile Denver hosts the Mile High Agile conference as a way for local Agile and Lean practitioners to engage with the larger community across the Front Range. This year's focus was being fierce. That includes being fiercely protective of our teams, and across the organization, as we look to scale agile practices. We'd like to thank everyone who stopped by our booth for a t-shirt, demo, or conversation. Rally team volunteering at Mile High Agile Looking Ahead to Agile 2019. Our next stop is Agile 2019. And since this is the largest agile conference, we're pulling out all the stops as a title sponsor. Some things you can expect: an Agility Lounge featuring our in-house agile experts, a larger-than-life booth that highlights our revitalized product name, and most importantly, great conversations and demos of our product. While we won't spoil the fun</description>
      </item>
      <item>
         <title>Fantastic Voyage Along The Continuous Delivery Pipeline</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/fantastic-voyage-along-the-continuous-delivery-pipeline</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/fantastic-voyage-along-the-continuous-delivery-pipeline</guid>
         <pubDate>March 28, 2018</pubDate>
         <description>This is the second post in a series targeted at helping Product Managers understand the importance of Continuous Delivery. The first post in this series explored why Continuous Delivery is critical to making a great product that users love and that helps achieve your business objectives. Continuous Delivery enables faster feedback cycles, providing more opportunities to learn, iterate, and ultimately succeed. This post takes a deep dive into the world of Continuous Delivery by following along on the fantastic voyage of a single change as it travels through a Continuous Delivery process. I find it's often helpful to take a deep dive into a real example to truly understand something. Therefore, much like the intrepid adventurers in the movie Fantastic Voyage, we're going to learn about Continuous Delivery from the inside out, following a single change as it travels through the Continuous Delivery process for the product I work on: CA Agile Central. Each change to CA Agile Central is continuously delivered when it's ready, about 20 times per day across 16 teams. The most common type of change is a front-end (aka user interface) only change, often to introduce a new feature, improve a feature, or fix a bug. Some changes also touch our backend services and APIs. In this post, we're going to dive into the details of a recent change to CA Agile Central: adding Work In Progress (WIP) Limits to Agile Central's new Team Board. It's important for teams to be able to define their own process that works best for them to accomplish their goals. Team Board is a new experience for teams to customize their process and easily iterate on improving it. Team Board was released in early 2017, providing the ability to quickly set up a visual</description>
      </item>
      <item>
         <title>End Alarm Fatigue with CA Application Performance Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/end-alarm-fatigue-with-ca-application-performance-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/end-alarm-fatigue-with-ca-application-performance-management</guid>
         <pubDate>December 5, 2017</pubDate>
         <description>We're all familiar with these axioms: Familiarity breeds complacency. Ignorance is bliss. What you don't know won't hurt you. While this next one is not as familiar as those above, it's just as important for us techies: Alarm fatigue (also known as alert fatigue) results from exposure to frequent alarms (alerts) and leads to desensitization, which causes longer response times and/or missed important alarms. Alarm fatigue also occurs in many other industries, including construction and mining (where backup alarms sound so frequently that they become senseless background noise) and healthcare (where monitors tracking vital signs sound alarms so frequently and for such minor reasons that they lose the urgency and attention-grabbing power they ought to have). It's as if Waldo were the only real alarm in a huge crowd of fake alarms. To use another analogy, it's like the little boy who cried wolf: false alarms rob real alarms of their value. Put an End to Alert Fatigue with CA Application Performance Management. In application performance management, alarm/alert fatigue and desensitization present real dangers to people whose job is keeping apps running 24x7, meeting SLAs, and reaching financial targets. If monitoring teams get lazy or comfortable and start to ignore alarms, they could miss an important alert. In one of my previous blogs, &quot;APM Monitoring Governance: The Jurassic Park Conundrum,&quot; I discuss the disadvantages of monitoring too much &quot;stuff&quot; in an application. Application performance can be affected by the monitoring tool itself, so we need to place limits on the number of monitors, using key performance indicators as a guide in selecting monitors. Monitoring governance and alarm/alert fatigue are different fields, but they're more closely related than some may think. Too many metrics with no governance can also lead to an overabundance of alerts, many of which are ignored. Too many</description>
      </item>
      <item>
         <title>Maximizing Value with Public Cloud</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/maximizing-value-with-public-cloud</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/maximizing-value-with-public-cloud</guid>
         <pubDate>December 21, 2017</pubDate>
<description>Public cloud services promise to deliver the agility that today's digital businesses demand. Organizations of virtually all sizes are consuming these services. Many organizations are rapidly moving existing workloads to the cloud and have a &quot;cloud first&quot; policy for new applications. But as organizations grow their adoption of cloud-based infrastructures and services, they need the right processes and tools in place to maximize the benefits.

Advice from David Linthicum


Watch this short video with renowned industry thought leader David Linthicum as he shares his advice for maximizing the value of public cloud within organizations. He highlights tracking SLAs, value metrics, performance and capacity as key items.
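SLA tracking, the first of those items, can be made concrete with a small calculation. Below is an illustrative Python sketch (the uptime target and downtime figures are made-up example numbers, not from the video) that checks monthly availability against an SLA target and reports the remaining error budget:

```python
# Illustrative SLA-attainment check (all numbers are hypothetical examples).
MINUTES_IN_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month

def availability(downtime_minutes):
    """Fraction of the month the service was up."""
    return 1 - downtime_minutes / MINUTES_IN_MONTH

target = 0.999                    # a 99.9% uptime SLA
downtime = 20                     # minutes of recorded downtime this month
actual = availability(downtime)
budget_left = MINUTES_IN_MONTH * (1 - target) - downtime

print(round(actual, 5), actual >= target, round(budget_left, 1))
# → 0.99954 True 23.2
```

Running the same arithmetic continuously against monitoring data is essentially what SLA dashboards automate: how close you are to the target, and how much downtime budget remains before a breach.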



To get upcoming videos with David, please sign up here.

At CA, we are continuously adding capabilities for monitoring and managing public cloud-based infrastructures. Don't take my word for it; try cloud monitoring out yourself.
</description>
      </item>
      <item>
         <title>The Inside Story on Mainframe Security</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-inside-story-on-mainframe-security</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-inside-story-on-mainframe-security</guid>
         <pubDate>April 1, 2018</pubDate>
<description>As we move into 2018, insiders continue to be a huge security threat to the enterprise. In fact, 77 percent of breaches in the privilege misuse category were the result of internal attacks, and there were 10,489 incidents of insider and privilege misuse in the last year alone. We all know how damaging these breaches can be, ranging from fines to negative headlines. But most importantly, 31 percent of consumers discontinued their relationship with a company that experienced a breach. Meaning, business outcomes depend on digital trust. The mainframe offers businesses a highly secure platform with pervasive encryption, as well as granular control over access permissions. To date, there has been only one public breach involving the mainframe; on other types of platforms, however, there have been thousands. Why? Mainframes can protect data throughout the data life cycle, and provide different levels of isolation across the application and OS stack. This simply is not the case for distributed enterprise systems, where copies of corporate data are stored in multiple locations. The key is leveraging the inherent power of data security on the mainframe. Digital Trust Is the Cornerstone of the Modern Economy You might be asking, &quot;How is digital trust relevant to the mainframe?&quot; For two important reasons. First, because most of an organization's sensitive and valuable data resides on the mainframe. And second, because the privileged credentials required to access this data today can often be shared amongst employees. While the mainframe is used to transact most of today's corporate data, the platform is increasingly interconnected to the rest of the data center, exposing more of its sensitive PII data, at the same time that the insider threat landscape is evolving, and the financial incentives of</description>
      </item>
      <item>
         <title>Wi-Fi 6: A new frontier in wireless communications and connectivity</title>
         <link>https://www.broadcom.com/blog/wi-fi-6--a-new-frontier-in-wireless-communications-and-connectivity</link>
         <guid>https://www.broadcom.com/blog/wi-fi-6--a-new-frontier-in-wireless-communications-and-connectivity</guid>
         <pubDate>August 16, 2019</pubDate>
<description>Wi-Fi is synonymous with fast wireless technology commonly found in everyday electronics, such as smartphones, laptops, home routers and gateways, wireless access points and TV set-top boxes. For the past two decades, Wi-Fi has enabled billions of people to connect to the internet, stream and enjoy high-quality multimedia content online, and upload and download large amounts of data on mobile devices. More importantly, Wi-Fi has revolutionized the way we access and consume digital information and made it easier for us to communicate and connect with the world around us. The digital world is becoming increasingly connected as more and more new devices have Wi-Fi built in. According to the Wi-Fi Alliance, Wi-Fi’s installed base has exceeded 13 billion units. This massive ecosystem of Wi-Fi-enabled devices is still expanding. It’s not just smartphones and laptops that need Wi-Fi connections. New classes of devices like wireless speakers, surveillance cameras, thermostats, refrigerators and a myriad of other smart appliances and machines are all connecting to the network through Wi-Fi. With the rapid growth of mobile data usage and fast expansion of IoT applications, existing wireless infrastructure operating on legacy Wi-Fi standards (802.11a/b/g/n/ac) is simply not sufficient to handle the increased connectivity and higher bandwidth demands. Based on the IEEE 802.11ax standard, Wi-Fi 6 is the latest generation of Wi-Fi designed to address the looming capacity crunch while creating a superhighway to support new and emerging multi-gigabit applications. Compared to Wi-Fi 5 (802.11ac), Wi-Fi 6 is better equipped to service a large number of Wi-Fi devices, especially in dense environments like city centers, malls, airports, concert halls and stadiums. In addition, the shift to Wi-Fi 6 includes significant improvements in data speed and latency for both uplink and downlink transmission. 
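One concrete piece of that speed improvement is easy to quantify: 1024-QAM packs 10 bits into each symbol versus the 8 bits of the 256-QAM used in Wi-Fi 5, so, all else being equal, each transmitted symbol carries 25% more data. A quick illustrative check in Python (a back-of-the-envelope calculation, not a full 802.11ax rate computation):

```python
import math

def bits_per_symbol(qam_order):
    """Number of bits one QAM symbol carries: log2 of the constellation size."""
    return int(math.log2(qam_order))

wifi5_bits = bits_per_symbol(256)    # 256-QAM (Wi-Fi 5 / 802.11ac)
wifi6_bits = bits_per_symbol(1024)   # 1024-QAM (Wi-Fi 6 / 802.11ax)
gain_pct = 100 * (wifi6_bits - wifi5_bits) / wifi5_bits

print(wifi5_bits, wifi6_bits, gain_pct)  # → 8 10 25.0
```

Real-world throughput gains also depend on channel width, spatial streams and signal quality, since the densest constellations require a very clean signal.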
Wi-Fi 6 brings a host of feature enhancements, such as 1024-QAM and OFDMA, designed</description>
      </item>
      <item>
         <title>CA APM is back as a Gartner APM Magic Quadrant Leader!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-is-back-as-a-gartner-apm-magic-quadrant-leader</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-is-back-as-a-gartner-apm-magic-quadrant-leader</guid>
         <pubDate>March 20, 2018</pubDate>
<description>Gartner just published the latest Magic Quadrant for Application Performance Management (APM) suites with a new addition to the leaders' quadrant - CA Technologies! We are of course delighted at this recognition, which we believe validates the completeness of our vision and ability to execute in this very competitive market. What we think is even more noteworthy is that CA is reclaiming leadership status after several years, which is a reflection of the kind of dramatic transformation we are undergoing as a company. Building upon a technology (Wily) that is often credited with creating what is now a multi-billion-dollar modern APM market, we are very proud to offer our customers one of the best and most modern solutions in the market today, one that provides:
- A broad set of monitoring capabilities that can instrument the entire mobile-to-mainframe spectrum
- Industry-leading scalability, with real-world deployments of 50,000+ agents collecting over 100 billion metrics per day
- Fine-grained data collection to help cope with today's dynamic application environments
- More than 100 patented or patent-pending AI/ML capabilities covering topology-driven application modeling, automated root cause analysis, pattern recognition and anomaly detection, and more
- Choice of SaaS or on-prem delivery using a single code base
- Deep visibility for cloud-native and microservices applications
- And tons of other features and functionality
This transformation is undoubtedly a result of the incredible hard work put in by our engineering, product management, product marketing and field organizations. However, no one deserves bigger credit than our customers, who reminded us of the basics and helped us reappreciate the fundamental reason that they invest in APM and other monitoring tools. That is, all features and functionality aside, what they really want are answers to a few simple questions, such as: Is the application delivering the desired user experience? 
Are the web pages loading within 3 seconds</description>
      </item>
      <item>
         <title>Flight School for Application Performance Aces - The Evolution of Application Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/flight-school-for-application-performance-aces-the-evolution-of-application-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/flight-school-for-application-performance-aces-the-evolution-of-application-monitoring</guid>
         <pubDate>November 27, 2017</pubDate>
<description>In the early days of aviation, pilots flew by the “seat of their pants”. With little in the way of instrumentation, they relied on sight and judgement. Fine when the skies were clear and conditions good, but not so great for flying when clouds and fog rolled in and things got, well, soupy. Stuck in the haze with no points of reference, pilots were in a space where nothing behaved normally. If they sensed the plane was descending, they pulled back on the yoke to gain altitude, only to find the plane diving steeper. Even when they believed the plane was level, indicators suggested a sharp turn. Not surprisingly, the best course of action was often to bail out before the plane hit the ground. The Need for Comprehensive Instrumentation Gut feel and instinct aren’t great for flying planes in adverse conditions. This has something to do with the way pilots process information, using visual and vestibular systems to figure out where they are in space. As it turns out, fluid movements within the inner ear canal can play all sorts of tricks. What’s actually level flight might be processed as a steep turn, and any intuitive action to correct the situation just makes matters worse. And, without views of the horizon, pilots could become so spatially disoriented that they lost control completely. With the birth of “instrument flight” this problem was averted. It meant supplementing the basic navigational devices most aircraft carried with a cohesive set of instrumentation pilots could use to double-check what their very fallible senses were telling them. This included artificial horizons combined with turn and bank indicators. But providing instruments alone wasn’t a total solution. Pilots still had to become skilled in using them – to trust them. It’s why today’s pilots can only fly in the soup</description>
      </item>
      <item>
         <title>Managing Microservices and Container Chaos - Application Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/managing-microservices-and-containor-chaos-application-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/managing-microservices-and-containor-chaos-application-monitoring</guid>
         <pubDate>September 7, 2017</pubDate>
<description>The Butterfly Effect Picture an environment where the failure of an application component brings a service to its knees. No big deal, we might say; it's familiar territory when managing monolithic applications and the 'fail one, fail all' issues symptomatic of a single logical executable. But now imagine an environment where the failure of a component we didn't even know about, and which might not even exist anymore, brings your systems down. Now we're dealing with both the complex and the chaotic. Welcome to the world of container and microservice monitoring. Passing the Complexity Monkey Microservices and containers are great for developers because they remove the fragility, scale and deployment issues associated with tightly-coupled application architectures. By decomposing apps into smaller independent services, supported by cloud and continuous delivery, microservice architectures allow developers to crank out code much faster, never having to wait for lengthy system rebuilds, redeploys and integration tests, or sweat on whether that one-line code change might have introduced a memory leak and brought the system down. So as a developer, wouldn't you want to work with something that takes all your pain away? Of course you would, but there's a catch: that pain doesn't disappear, it moves elsewhere. With the shift from monolithic to microservice applications, other groups now have to support the &quot;complexity monkey&quot; and the whole new set of challenges it brings. And the new tech zookeepers? Site reliability engineers, DevOps practitioners and IT operations. New Levels of Complexity From an application monitoring perspective, a microservices architecture that structures an application as a collection of loosely coupled, distinct services introduces a whole new level of complexity. 
First up, these architectures naturally increase the proliferation of software instances due to the decomposition of monolithic applications - but that's only the start.</description>
      </item>
      <item>
         <title>CAST a Bright Light into the Application Black Box</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/cast-a-bright-light-into-the-application-black-box</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/cast-a-bright-light-into-the-application-black-box</guid>
         <pubDate>November 8, 2017</pubDate>
         <description>The mainframe stands at the center of your goals for digital modernization. The question is, do you perceive the mainframe to be an enabler that supports modernization, or a bottleneck that hinders it? Many CIOs view the mainframe as a bottleneck. Not because of the hardware or software itself, but because of a looming shortage of mainframe skills. A huge majority of CIOs – a full 71 percent – are concerned that this skill shortage will hurt their businesses. Specifically, CIOs are concerned about an increased application risk (58 percent), reduced productivity (58 percent) and more project overruns (53 percent).[1] There is good cause for these concerns, since a staggering 47 percent of developer time is spent simply digging into code.[2] Testing is a further time drain as coding errors, security flaws, and unrecognized dependencies send updates back to the drawing board. The result is that it can take eight to twelve weeks to have even a minor change pushed into the mainframe environment. Such delays wreak havoc on the desired speed to market for modernization – and, therefore, on a company's agility and competitiveness. Developers face layers of complexity The problem is that many mainframe applications have grown to resemble huge monoliths – complex systems requiring extensive reading of code, source level debugging, and lengthy exercises to understand application flow and interrelations. Fortunately, CA Technologies has partnered with CAST Software, a leader in the software quality and measurement space, to cast a bright light into this application &quot;black box&quot;. The pace of Digital Transformation will not tolerate a waste of valuable resources from developers sifting endlessly through complex code in the effort to bring modernization to the mainframe. Rather, developer mindshare and talent should be leveraged to create new features and functions that will change the market, improve the</description>
      </item>
      <item>
         <title>PPM 101: Project finances made easy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-project-finances-made-easy-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-project-finances-made-easy-clarity-ppm</guid>
         <pubDate>February 19, 2019</pubDate>
<description>It's not easy running project finances, with all the requests for reports, limited insight into work and competing priorities in the portfolio. To better understand how Clarity PPM helps finance managers overcome day-to-day challenges, we reached out to Broadcom's very own project finance expert, Linda Chase. Linda uniquely combines experience with PPM software, finance and operations. She has been in the software development industry for the last fifteen years in a variety of senior-level product management positions. Linda also brings over 10 years as a financial and operations officer for publishing and direct marketing companies in Colorado. We started our discussion by asking Linda to complete the sentence &quot;Clarity PPM improves the capability of financial managers by...&quot; &quot;Enabling planned costs to be correctly mapped to cost categories at the beginning of a project for both capital and operating expenses,&quot; said Linda. She went on to explain that &quot;aligning these costs at the beginning of a project ensures financial category accuracy throughout the life cycle of the project. The ability for project managers to see line item transaction costs during project cost plan analysis keeps the project manager and the finance manager speaking the same language in discovering problems early in a project cycle.&quot; This is something that many organizations forget when it comes to project delivery. These organizations expect project managers to deliver projects against a fixed, and often aggressive, budget, but they do little to create an environment that makes it easier for that to happen. This is a key advantage of Clarity PPM: It is built to make that alignment easier, allowing organizations to engage finance and project managers easily without having to define the way the relationship works. We asked Linda to tell us why that was important to finance managers and her answer demonstrated</description>
      </item>
      <item>
         <title>Advanced Network Tools Visibility with Unified Alarm Views</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/advanced-network-tools-visibility-with-unified-alarm-views</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/advanced-network-tools-visibility-with-unified-alarm-views</guid>
         <pubDate>July 2, 2018</pubDate>
<description>The latest release of CA Performance Management, 3.6, delivers advancements in network tools integration and represents an important step toward a unified approach to network operations management. The network fault and performance foundation of CA's Network Operations and Analytics platform is powered by the CA Performance Management and CA Spectrum network tools. CA Performance Management lets you view and manage CA Spectrum alarms from the Performance Center portal using the Alarms View. The Alarms View lets you quickly focus on resolving your most impactful problems and provides visibility into other, potentially related, issues on the same device or a connected device. In CA Spectrum, the viewing of alarms in OneClick is central to most operational workflows. OneClick alarm views enable you to identify the most impactful problems by presenting a prioritized list of alarms, and facilitate network troubleshooting by correlating related alarms on the same device or neighboring devices. Figure 1: CA Spectrum alarms in the CA Performance Center portal. The Alarms View lets you view this same prioritized list of alarms in the CA Performance Center NetOps portal, eliminating the need to jump between multiple network tools. Usability Features CA Performance Management offers many customizable controls for the NetOps Alarms View to provide an effective and clutter-free UI. The Details section of the view provides a contemporary and efficient layout, where you can control which panels appear in the section, including an Events panel. Figure 2: Details section of the Alarms View in the CA Performance Center portal. The height of the alarm grid can be customized, which controls the number of visible alarms. In addition, the alarm grid offers an expanded list of over 20 attributes to choose from when adding or removing columns or when sorting. Figure 3: Attributes for adding and removing columns and sorting. These features</description>
      </item>
      <item>
         <title>BroadR-Reach Ethernet: Enterprise-Level Security for Connected Cars</title>
         <link>https://www.broadcom.com/blog/broadr-reach-ethernet-enterprise-level-security-for-connected-cars</link>
         <guid>https://www.broadcom.com/blog/broadr-reach-ethernet-enterprise-level-security-for-connected-cars</guid>
         <pubDate>August 22, 2014</pubDate>
<description>It's a scene straight out of a spy or sci-fi movie: A malicious figure sits in front of a glowing computer monitor in a cave-like room, typing away. The next moment, an unsuspecting victim's high-tech car miles away careens out of control. The radio blasts at full volume, the steering wheel jerks and the brakes fail. This car has been hacked, and it's now fully under the villain's control. A futuristic tableau like this is terrifying, but fortunately, the threat of cars being taken over by hackers is more a product of Hollywood imagination than reality. That's because today's connected car isn't as connected as it seems: The powertrain, telematics, safety and infotainment systems are isolated and typically use various legacy networking technologies that operate independently. &quot;The overall systems in the car are very disjointed today,&quot; said Tim Lau, director of automotive connectivity in the Infrastructure and Networking Group at Broadcom. &quot;They don't intercommunicate, so it's very difficult to hack into one portion of the car and be able to access the entire vehicle.&quot; Today's array of in-vehicle technologies falls short of the advanced networking capabilities needed for a truly connected car. That's why developers have been clamoring for a faster, scalable, flexible, cost-effective networking protocol. Most importantly, they want a solution that can offer fail-safe protections against malfunctions and malicious cyber-attacks from would-be hackers. Broadcom's in-car connectivity technology, called BroadR-Reach, meets that need. The global standard of Ethernet, which has been around for nearly four decades, has a long track record of secure deployment in dynamic, plug-and-play technology environments. 
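One of those long-standing Ethernet safeguards is filtering traffic by source address, which is especially effective when the set of legitimate devices on the network is fixed. A minimal illustrative sketch of the idea in Python (the MAC addresses and device names are hypothetical examples, not part of BroadR-Reach):

```python
# Illustrative source-address allowlist, a classic Ethernet safeguard.
# In a car, the set of legitimate ECUs is known at design time, so any
# frame from an unknown sender can simply be dropped.
# The MAC addresses below are made up for the example.
KNOWN_ECUS = {
    "00:1a:2b:00:00:01",  # e.g. a telematics unit
    "00:1a:2b:00:00:02",  # e.g. an infotainment head unit
}

def accept_frame(src_mac):
    """Accept a frame only if its source is in the fixed, known device set."""
    return src_mac.lower() in KNOWN_ECUS

print(accept_frame("00:1A:2B:00:00:01"))  # known device, accepted → True
print(accept_frame("de:ad:be:ef:00:00"))  # unknown sender, dropped → False
```

In practice this kind of filtering runs in switch hardware rather than software, but the principle is the same: a closed, predictable device set makes anomalous traffic easy to spot.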
Ethernet's proven security features have an added advantage in automotive applications: The devices and configurations of in-car networks are known and predictable, so identifying and protecting against threats can be a</description>
      </item>
      <item>
         <title>Lessons Learned From Container Performance Tuning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/lessons-learned-from-container-performance-tuning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/lessons-learned-from-container-performance-tuning</guid>
         <pubDate>March 5, 2018</pubDate>
<description>When I started using containers to deploy software, I did what everyone else was doing: I launched a container running Alpine Linux, logged into a Bash shell running inside the container, and said, &quot;That was easy!&quot; Of course, that first container experience did not do much to solve any real-world problems. It was not until I took the next step, migrating an existing application that I was working on into a container, that I got a taste of the challenges that arise when using containers. This article highlights some of the monitoring and troubleshooting challenges that you're likely to face when you use containers, based on my experience containerizing a Spring Boot app at my company and deploying it using Docker. Starting Up Getting started with the Spring Boot app was easy. I was able to pull a template for the Dockerfile I needed from some blog posts, then tweak it to get things just right before launching my container. When I started the container, all went smoothly. It fired up without issue, and I was able to connect to the app from my test client. So far, so good. Now, let's take a look at the log files. Log Files Pulling the logs from the container runtime is easy enough. 
You first pull the list of containers running, then list the logs for the appropriate container, like so:

[root@origin helloworld-springboot]# docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED         STATUS         PORTS                    NAMES
6523e9917c11   springboot/helloworld   &quot;java -Djava.security&quot;   3 minutes ago   Up 3 minutes   0.0.0.0:8080-&gt;8080/tcp   romantic_jepsen
[root@origin helloworld-springboot]# docker logs --tail 5 6523e9917c11
2018-01-21 03:47:05.458  INFO 1 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-01-21 03:47:05.466  INFO 1 --- [           main] c.e.j.g.HelloworldApplication            : Started HelloworldApplication in 4.379 seconds (JVM running for 5.618)
2018-01-21 03:47:12.134  INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       :</description>
      </item>
      <item>
         <title>Simplifying Modern Network Monitoring, From Your Office to Timbuktu</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/simplifying-modern-network-monitoring-from-your-office-to-timbuktu</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/simplifying-modern-network-monitoring-from-your-office-to-timbuktu</guid>
         <pubDate>February 22, 2018</pubDate>
<description>CA and Netrounds provide modern network monitoring for predictive network behavior. When Shamus McGillicuddy, Senior Analyst at Enterprise Management Associates, published the 2017 EMA Innovator Report for network monitoring late last year, there were two statistics that caught my eye. Firstly, on average, 71% of a Network Operations Manager's day is spent fixing network problems, through either reactive troubleshooting (&quot;firefighting&quot;) or proactive problem prevention. This alarming statistic leaves little time for your operations team to spend on projects that deliver real value to your business. Secondly, the number one challenge that operations teams face today is the &quot;lack of end-to-end network visibility&quot;, largely due to a fragmented toolset. Although we realize that &quot;one tool to rule them all&quot; will never exist, the fact that 24% of network engineers use 6-10 monitoring and troubleshooting tools daily and 34% use more than 11 is just alarming! So, how can we help to remedy this situation? As a partner in the CA NetOps/SDN Ecosystem, specifically working with the Network Operations and Analytics solution from CA, Netrounds provides active testing and monitoring integrated into the CA solution. When used with Netrounds, Network Monitoring software from CA can validate the dynamic creation and changes of network services in automated SDx and public cloud environments. Together with live full-stack network monitoring and added synthetic insights, this powerful integrated approach actively tests and monitors pre-production and production deployments for predictive network behavior and validation to help optimize the customer experience. Instrumental in reducing the large number of tools that many IT organizations struggle to handle, the combination of CA and Netrounds allows you to combine traditional performance management and monitoring, fault management, and active testing and monitoring into one view. 
In addition, the Netrounds active solution covers a wide spectrum of network KPIs and</description>
      </item>
      <item>
         <title>What's New in APM - CA Application Monitoring Enhancements</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-apm-ca-application-monitoring-enhancements</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-apm-ca-application-monitoring-enhancements</guid>
         <pubDate>September 26, 2017</pubDate>
<description>Over the past 12 months, CA has released several new Application Performance Monitoring features that deliver market-leading innovation balanced with meeting the needs of our enterprise customers. Below are a few of the latest innovations delivered across CA Application Performance Management, CA App Experience Analytics and CA App Synthetic Monitor, as well as a list of releases. Experience View – A business-focused view of what you are delivering to your customers that provides immediate answers to the key question with minimal work: &quot;What is the customer experience?&quot; The Experience View provides a summary of health across the entire application environment, displays the impact to the customer experience and provides users with an easy way to visualize problems and rising issues that could eventually cause performance issues. Assisted Triage &amp; Analysis Notebook – Provides immediate answers to the question &quot;Why is the experience poor?&quot; without all the manual digging around. Assisted triage uses the underlying analytics of perspectives, timeline and differential analysis, and combines this with our own knowledge of managing large enterprise applications to provide guided workflows for speeding application triage. Problems and Anomalies organize and merge their evidence into folders, giving the overall look-and-feel extra polish and utility. It presents all the data related to an issue in a single view, allowing even the novice user to easily determine the root of an issue. It also surfaces differential analysis capabilities, for algorithmic and predictive analytics, in a more visible and usable way. Zero-Configuration Agent – Automatic backend detection removes the need for custom configuration. It provides patent-pending technology that tracks start and end points along with everything in the middle of the trace. 
It will add in extensions based on what it finds; for example, if it finds calls to MongoDB, it will add in monitoring for that component. Extensions Marketplace –</description>
      </item>
      <item>
         <title>Why A Converged Network Monitoring Experience is So Important</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-a-converged-network-monitoring-experience-is-so-important</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-a-converged-network-monitoring-experience-is-so-important</guid>
         <pubDate>May 6, 2018</pubDate>
<description>Suddenly, the network is cool again. Tech trends such as the Internet of Things, software-defined networking (SDN) and growing end-user expectations all add up to a demand for &quot;dial-tone&quot; network performance and reliability. But with the data deluge continuing to accelerate, and organizations relying on multiple clouds to achieve business goals, how can the network keep up, much less your network monitoring? Because of rapid advances in networking technology coupled with user demands, network managers often find themselves with far too many tools, each designed to manage or monitor a single aspect of the enterprise network and application performance. Today, half of enterprises find themselves using 11 or more network monitoring tools, and for many, this may be just the beginning. For example, as organizations continue the adoption of software-defined WAN (SD-WAN), with the goal of replacing older MPLS connections with modern broadband internet, they find their existing network monitoring tools aren't efficient, and over two-thirds of SD-WAN adopters have added yet another tool or thrown up their hands and outsourced management to their network service providers. The problems created by relying on swivel-chair management, where data is entered into one system and then manually re-entered into another, are many. First, there is the duplication of effort involved in managing multiple tools and interfaces. Then there is the uncertainty brought on by having to rely on a variety of network monitoring tools for various infrastructure components. Just keeping track of which tool is used for which network element can induce stress. Figure 1: Percentage of total network operations work time spent. As a result, network managers now spend over 70% of a typical workday troubleshooting, according to a recent Enterprise Management Associates report. 
Split almost evenly between problem prevention and reactive firefighting, busy network managers are able to devote only</description>
      </item>
      <item>
         <title>Part 3: Five Principles to Supercharge Continuous Delivery - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/part-3-five-principles-to-supercharge-continuous-delivery-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/part-3-five-principles-to-supercharge-continuous-delivery-rally-software</guid>
         <pubDate>April 18, 2018</pubDate>
         <description>The first post in this series explored why Continuous Delivery is critical to making a great product. The second post took a deep dive into a single change delivered via Continuous Delivery to provide an example of what Continuous Delivery might look like in action. This post explores practices that help to supercharge a Continuous Delivery process. While specific Continuous Delivery implementations may vary from company to company, there are a few generally applicable principles for effective Continuous Delivery. So what are some guiding principles, and what are some examples of practices that support them? Principle 1: Make Small, Frequent Changes The longer work sits in a queue, such as waiting for a release, the more waste accumulates between the cost of the change (e.g. design, development, testing) and the value you have not yet validated. Meanwhile, you might be making decisions based on assumptions that will be invalidated once you release the change. Generally, the larger the change, the longer it will likely take to make it through your delivery process, and therefore the more risk it creates. Splitting changes into the smallest potentially valuable increment helps to speed up changes. For example, perhaps instead of implementing a new way of filtering every page in your product, you could implement a filter on one page with a single type of filter and test it with a few users. Perhaps instead of a few weeks you could get feedback in a few hours or days, doubling down on your current approach or iterating to improve it. So what practices are helpful for making small, frequent changes? Purposefully Planned Small Changes What's the smallest change you can learn from and test with users? Do that. A good rule is to try to size changes so they only take a few days each. From time to time,</description>
      </item>
      <item>
         <title>Why You Need Application Performance Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-you-need-application-performance-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-you-need-application-performance-management</guid>
         <pubDate>December 11, 2017</pubDate>
         <description>If You're Not Gauging Performance, You're Not Improving Kent Beck is famous for saying, &quot;Make it work, make it right, make it fast, make it small&quot;. If you haven't been indoctrinated, Kent is one of the founding fathers of Agile, and the gist of his advice is that you don't try to optimize before something is built properly, and you don't worry about building something properly when you're first getting it to work. Engineers are often &quot;chess players&quot;, and want to think many moves ahead to a perfect design for a new problem. Kent's advice is that you won't know what that looks like until you actually begin. If you try to make it fast immediately, you're likely wasting your time. Henrique Bastos points out, and I agree, that these stages ought to be part of a single cycle of development, part of the Definition of Done for each release. Putting Beck-ian Principles in Practice Let's imagine a shop with the goal of putting a RESTful layer of web services on top of a company's product inventory. In the past, the business sold directly to customers. Leadership did some analytics and determined that, if the business engaged resellers, then they could increase sales with little additional overhead. The large constant factor is the transition from legacy relational databases fronted by first generation web applications, to NoSQL databases that can index and search product inventory at lightning speed. While this is far from the road not taken, it's a large enough job to call for some best practices. Following Kent's advice, our developers design their REST APIs, with input from resellers. In this process, they discover that the resellers would like a sample code base showing how to do authentication and consume the output. The developers decide to make the sample</description>
      </item>
      <item>
         <title>Ensuring Your SAP Infrastructure Runs At Lightning Speed</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ensuring-your-sap-infrastructure-runs-at-lightening-speed</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ensuring-your-sap-infrastructure-runs-at-lightening-speed</guid>
         <pubDate>January 31, 2018</pubDate>
         <description>SAP is at the heart of many organizations' most mission-critical business processes. If SAP systems, databases and supporting IT infrastructure are not functioning optimally, critical business functions, such as order-to-cash or financial close, can slow down or even become completely unavailable. SAP ERP based on the R/3 architecture is, as the name suggests, a three-layer architecture. To make sure the end-user experience is top notch, all three layers need to function optimally in tandem. (1) Presentation Layer This layer contains the software components that make up the SAP GUI (graphical user interface) and is the interface between the R/3 System and its users. The R/3 System uses the SAP GUI to provide an intuitive graphical user interface for entering and displaying data. The SAP GUI can be either a thick client or a modern web-based interface. The presentation layer sends the user's input to the application server, and receives data for display from it. While a SAP GUI component is running, it remains linked to a user's terminal session in the R/3 System. (2) Application Layer This layer consists of one or more application servers and a message server. Each application server contains a set of services used to run the R/3 System. Theoretically, you need only one application server to run an R/3 System; in practice, the services are distributed across more than one application server. The message server is responsible for communication between the application servers. It passes requests from one application server to another within the system. It also holds information about application server groups and the current load balancing within them, and uses this information to assign an appropriate server when a user logs onto the system. (3) Database Layer This layer consists of a central database system containing all of the data in the R/3 System.</description>
      </item>
      <item>
         <title>Wi-Fi CERTIFIED 6 has officially arrived</title>
         <link>https://www.broadcom.com/blog/wi-fi-certified-6-has-arrived</link>
         <guid>https://www.broadcom.com/blog/wi-fi-certified-6-has-arrived</guid>
         <pubDate>September 16, 2019</pubDate>
         <description>Today, the Wi-Fi Alliance launched the Wi-Fi CERTIFIED 6 program, supporting next-generation Wi-Fi products and devices based on the IEEE 802.11ax standard. This is exciting news for the industry and for consumers. Wi-Fi CERTIFIED 6 devices provide the fastest, most reliable, most efficient Wi-Fi to date and, at Broadcom, we’re thrilled to be a leader in Wi-Fi 6. The Wi-Fi 6 difference Wi-Fi 6 was designed for today’s connected world. Wi-Fi CERTIFIED 6 devices offer consumers lower latency, better battery life and as-yet-unseen throughputs. With innovations like uplink and downlink OFDMA, MU-MIMO, target wake time (TWT) and 160 MHz channel capabilities, Wi-Fi CERTIFIED 6 devices create a next-generation connected experience. This ecosystem provides greater network capacity that can support high performance even in congested spaces like stadiums or airports. It enables the high speeds and efficient connectivity consumers crave and will be key to supporting 5G services. Broadcom — A Wi-Fi 6 leader More than six months ago, Broadcom brought first-of-its-kind Wi-Fi 6 powered devices to market. Today, I voiced my support of today’s milestone announcement on behalf of the company: “Broadcom is thrilled to have three of our best-in-class devices included in the certification testbed for today’s official launch of Wi-Fi CERTIFIED 6 — the BCM4375, BCM43698, and BCM43684. These Broadcom devices already power tens of millions of Samsung Galaxy phones and routers around the world. Capable of supporting up to 160 MHz wide channels, Wi-Fi CERTIFIED 6 devices offer consumers lower latency, better battery life and as-yet-unseen throughputs, all of which are critical for 5G services. As the full 6 GHz band is made available for unlicensed use — with multiple 160 MHz-wide channels — the Wi-Fi 6 consumer experience will be turbocharged for the gigabit home and AR/VR.” Powering the first Wi-Fi CERTIFIED 6 phones Broadcom partnered</description>
      </item>
      <item>
         <title>PODCAST: NetOps for MSPs Discussions with Francois Cattoen, Broadcom Product Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-netops-for-msps-discussions-with-francois-cattoen-broadcom-product-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-netops-for-msps-discussions-with-francois-cattoen-broadcom-product-management</guid>
         <pubDate>May 22, 2019</pubDate>
         <description>Francois has spent 15 years in the IT industry covering different roles (senior developer, technical sales and product management). He has a master's degree in IT with a specialty in distributed systems. Located in Boston, he has been with CA Technologies for seven years, and before that was with Nlyte Software and Fujitsu. He speaks French, English and German and is married with one child. https://www.linkedin.com/in/francoiscattoen/</description>
      </item>
      <item>
         <title>11 Steps to Having Difficult Conversations, Successfully</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/11-steps-to-having-difficult-conversations-successfully-rally-software-formerly-ca-agile-central</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/11-steps-to-having-difficult-conversations-successfully-rally-software-formerly-ca-agile-central</guid>
         <pubDate>April 11, 2018</pubDate>
         <description>Every day in the hyper-collaborative R&amp;D organization for Rally, we face difficult conversations with opposing opinions, high emotions and high stakes. Every time they occur we are faced with the opportunity to avoid the situation or step into it. However difficult, the best thing to do is to tackle the conversation in a way that will create better outcomes for you, your team, the product, and ultimately, the customer. But stepping in isn't always enough. With high emotions our bodies often react in fear, causing us to come out defensive or angry, creating an unsafe environment that is not conducive to successful outcomes. To combat our fight or flight instincts, we decided it was important to have a common framework for success we could practice across our organization. We hope by sharing it with you, that you will bring it into your organization, and maybe share some of your tips and tricks with us. 1. Identify issues and write them down. Don't script your introduction or discussion, but jot down some notes about what is really bothering you. If you write down issues vaguely like &quot;you're always late&quot; or &quot;you never follow the schedule&quot;, the other party will immediately jump on the defensive with examples of every time your statement was false. Instead, write down how it makes you feel and how it affects you or the team. Example: A developer is regularly 20 minutes late. Does it bother you because you feel your time is not valued? Because your commitments are being compromised? Because you feel they are not being held to the same standard as you? Be as specific as you can. 2. Ask yourself several questions. What is the purpose of this conversation? What do you hope to accomplish? What is your ideal outcome? The conversation is</description>
      </item>
      <item>
         <title>Comprehensive Citrix Monitoring That Your End Users Would Love - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/comprehensive-citrix-monitoring-that-your-end-users-would-love-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/comprehensive-citrix-monitoring-that-your-end-users-would-love-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>December 29, 2017</pubDate>
         <description>The latest generation of Citrix XenApp and XenDesktop delivers apps, data and desktops on a unified platform, allowing organizations to define and scale their virtual infrastructure in more ways, quickly, easily and economically. CA Unified Infrastructure Management (CA UIM) complements this unified solution with comprehensive, integrated Citrix monitoring for every part of the environment. Customers can speed isolation of user issues, monitor infrastructure performance and user experience, and gain insight into current trends to plan for the future. A Citrix administrator wants a concise view of the XenDesktop deployment in the organization, with the goal of keeping the Citrix XenApp and XenDesktop environment running at peak performance. A Citrix deployment includes multiple interdependent tiers, which makes pinpointing the source of performance issues challenging. Poor performance and slowdowns of Citrix XenApp impact both internal and external users. The administrator's biggest pain point is the lack of clear visibility into the overall Citrix 7.x deployment. Any admin would love a tool that shows the overall state of the Citrix XenDesktop/XenApp deployment rapidly and enables faster issue detection and isolation. To give some context, XenApp 7.6 and XenDesktop 7.6 are based on FlexCast Management Architecture (FMA). FMA is a service-oriented architecture that allows interoperability and management modularity across Citrix technologies. FMA provides a platform for application delivery, mobility, services, flexible provisioning, and cloud management. FMA replaces the Independent Management Architecture (IMA) used in XenApp 6.5 and previous versions. 
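The interdependent tiers described above are what make threshold-based probe monitoring useful for fast issue isolation. As a minimal, hypothetical sketch (not CA UIM's actual probe API; all tier names, metric names and thresholds here are invented for illustration), a probe might compare per-tier metrics against thresholds and flag only the tier that is misbehaving:

```python
# Illustrative only: poll-style check that flags tiers breaching thresholds,
# mimicking the rapid triage a Citrix monitoring tool aims to provide.
from dataclasses import dataclass

@dataclass
class TierMetric:
    tier: str        # hypothetical tier name, e.g. "Delivery Controller"
    metric: str      # hypothetical metric name, e.g. "broker_queue"
    value: float     # latest sampled value
    threshold: float # alert when value exceeds this

def isolate_issues(samples: list[TierMetric]) -> list[str]:
    """Return an alert string for every tier metric over its threshold."""
    return [
        f"ALERT {s.tier}: {s.metric}={s.value} exceeds threshold {s.threshold}"
        for s in samples
        if s.value > s.threshold
    ]

samples = [
    TierMetric("StoreFront", "response_ms", 120.0, 500.0),
    TierMetric("Delivery Controller", "broker_queue", 42.0, 25.0),
    TierMetric("VDA", "logon_duration_s", 18.0, 30.0),
]
for alert in isolate_issues(samples):
    print(alert)  # only the Delivery Controller tier is flagged
```

A real probe would of course collect these values from the deployment itself and feed alerts into an operations console, but the pattern (sample each tier, compare against a baseline, surface only the breaching tier) is the core of the "faster issue detection and isolation" described above.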
With the release of XenApp and XenDesktop 7, these products have been expanded and combined into a single architecture; to align with this, CA UIM's XenDesktop monitoring probe has also been combined to support monitoring of both XenApp and XenDesktop 7.x and beyond. For XenApp versions prior to 6.x, CA UIM has a separate XenApp probe. Overview And Data Collection</description>
      </item>
      <item>
         <title>Cisco ACI Demands Advanced Network Monitoring and Analytics</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/cisco-aci-demands-advanced-network-monitoring-and-analytics</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/cisco-aci-demands-advanced-network-monitoring-and-analytics</guid>
         <pubDate>May 6, 2018</pubDate>
         <description>Today's networks are changing. A lot of IT and NetOps budget is being spent on software defined architectures, and Cisco ACI (Application Centric Infrastructure) plays a big part in all of this. More than 3,500 customers have over 12,000 Nexus 9000 switches already installed in today's marketplace. Software defined networking (SDN) is taking hold, and these numbers prove it. Cisco ACI is really about automation and programmability in the data center: removing the manual steps it takes to run a data center network, centralizing the configuration of the network, and providing abstraction that makes programming the network easier. Finally, it delivers infrastructure that is agile enough to allow the application experience to flow freely through the data center and to the end user. Applications are the identity of today's data center, and 'application profiles' are how Cisco ACI was designed to configure the network – for optimal application experiences. Cisco ACI presents network operations challenges The Cisco APIC GUI (Element Management System) is great for the network engineers designing and deploying the network but doesn't offer network operations any advantages in troubleshooting the network. It lacks the monitoring scale required by SDN, along with easy operational troubleshooting workflows and triage scenarios. Cisco ACI abstracts the physical network into virtual and logical layers and entities, which means a lot more devices and interfaces on the network than there ever have been before, moving around and sucking up data center resources with the click of a mouse. Cisco ACI centralization and abstraction can also mean a lot more noise on the network. This technology has 23,000 events defined in it, with hundreds of unique messages and alarms. This many events and faults can flood your network and overwhelm an operations team's ability to troubleshoot efficiently. If you can't get ahead of</description>
      </item>
      <item>
         <title>Conversational Cloud Monitoring - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/conversational-cloud-monitoring-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/conversational-cloud-monitoring-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>December 26, 2017</pubDate>
         <description>Hi, I'm Brien. I've been writing for Conversational Geek for a long time, and have covered a wide variety of topics. When I say &quot;wide variety&quot; I really mean it. I have written Conversational Geek books on everything from Amazon Web Services to rocket science! With so many titles under my belt, you might be wondering why I chose to write about cloud monitoring for my latest Conversational Geek book.

No, it’s not because rocket science isn’t difficult enough, and I needed a real challenge. The main reason why I wanted to write about cloud monitoring is because the cloud is a game changer. Monitoring tools have been around for decades, and yet existing tools tend to be inadequate for monitoring backend infrastructure in cloud environments. These tools can also be fairly difficult to use in hybrid environments. And since businesses are already heading down the hybrid route, I wanted to take the opportunity to write about what is really needed for effective cloud monitoring.

Oh, and one more thing… Although this book will eventually have a sponsor (we can’t print these books for free), it is not intended to be a vendor product pitch. My goal here is to take a vendor-neutral approach, and talk about strategies for effective cloud monitoring.

Please download a free copy of the book here

This is a guest post by Brien Posey. Brien is a leading cloud expert with over two decades of experience in IT. He is an internationally published author and conference speaker. You can follow him on Twitter.
</description>
      </item>
      <item>
         <title>Instant Data Analysis with Kafka Streams - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/instant-data-analysis-with-kafka-streams-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/instant-data-analysis-with-kafka-streams-rally-software</guid>
         <pubDate>January 16, 2018</pubDate>
         <description>The Streaming-Summary Problem At 5 p.m., a business closes its doors and the owner wants to tally up purchases for the day. It's easy, if tedious, to go through the stack of receipts (data) and calculate totals, averages, and variance for purchases that day (or any past day). What if the business is open 24 hours a day and the owner wants to continuously produce descriptions of purchases for each hour? To make it interesting, assume receipts can show up late by minutes, weeks, or years. And sometimes customers submit updates to purchases they completed months ago. And the owner wants to calculate order statistics. And the customers don't just have one receipt, they each have a receipt for errors, a receipt for database usage, and a receipt for CPU time. And there are a few thousand customers per second. Some of the reporting that we do on usage of Agile Central is like the former scenario; we can take as long as we want to run calculations and it's safe to assume we have all the data. But we also produce streaming summaries of usage over ten-minute windows, which allows us to quickly detect changes to performance that can't wait until the end of the day. All of the data we want to summarize is produced by application servers onto Kafka (a distributed log that we use to transmit messages). In the past, we've used Apache Samza for streaming work, but we've found it to be resource intensive and opaque. It was easy to lose days beating your head against configuration, only to discover an obvious exception was disappearing into the ether. A First Pass We're fairly early adopters of Kafka Streams, and we've definitely had some growing pains. Early on, minor version changes seemed to break everything. The</description>
      </item>
      <item>
         <title>It's about products, not projects</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-pundits-it-s-about-product-not-project-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-pundits-it-s-about-product-not-project-management</guid>
         <pubDate>April 4, 2019</pubDate>
         <description>Product portfolios are the talk of the town. While the concept isn't new, increasing attention is being paid to how we manage product portfolios. But before we take a closer look, let's consider why product portfolio management is important to the entire company. We start with a definition of the product portfolio. At the simplest level it is nothing more than an inventory of all the products and services offered by an organization. However, to be useful, it must be a little more involved than that. First, we need to ensure we have truly identified every offering, not just the main ones. Any organization will have those almost-forgotten niche products that only one or two people know about and that only have a handful of customers using them; they must be included just as much as the main offerings. Then, we need to ensure all markets are considered - market segment, geography, etc. And we need to include subsidiary or related companies in those numbers, along with any add-on modules offered through in-house or partner professional services. We also need to understand our internal products and services - those things we have developed for use by our employees - they are still part of our product portfolio. The result is a complex picture of the various elements or versions of the product portfolio that exist across the organization, and before we can manage that environment effectively we need to be able to develop a consolidated, integrated view - a single product portfolio for the entire organization. Without that we are &quot;managing blind&quot;, adjusting one set of products without understanding the impact on other areas, and that hurts both effectiveness and efficiency. It also exposes the organization to unnecessary risk. Organizations must therefore develop a single, integrated portfolio of all the</description>
      </item>
      <item>
         <title>Network Monitoring Doesn't Mean Always Sniffing Packets</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/network-monitoring-doesn-t-mean-always-sniffing-packets</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/network-monitoring-doesn-t-mean-always-sniffing-packets</guid>
         <pubDate>February 27, 2018</pubDate>
         <description>Protocol analysis remains one of the most rewarding yet challenging aspects of troubleshooting application, network and server performance, even after all these years. But for network monitoring, protocol analysis is much more than merely sniffing packets. First there is the challenge of collection through SPAN/RSPAN, ERSPAN, tap, physical, virtual, software-defined…all of these technologies present their own challenges in simply acquiring the data in the first place. Once the data is acquired, it needs to be examined. This requires knowledge of packets, protocols, and communication processes. The challenge of acquiring data is just the beginning Most large organizations are fortunate if they have two packet gurus on staff, while many don't have any. Fifteen or twenty years ago, deep packet analysis was the way to solve complex performance issues, and it still is today. However, it is very time consuming, taking hours, days and sometimes weeks to collect and examine the right data to narrow down any performance issue. To that point, it is highly recommended to leverage metrics. Network monitoring that takes packet data and records it as critical metrics is invaluable when doing rapid triage. Additionally, it is easier to train employees to look at metrics than it is to teach protocol analysis. Once you have the metrics, incorporating those performance metrics into an overall workflow that combines packet data, flow data, SNMP and non-SNMP information is what modern network triage is all about. A good network monitoring software workflow not only allows for faster Mean Time to Information (MTTI) but can also be developed into Standard Operating Procedures (SOPs). Once an SOP is defined, it lends itself to a workable training plan that can be repeated successfully. Figure 1: Troubleshooting performance issues one packet at a time is not only time consuming, but requires a great deal of</description>
      </item>
      <item>
         <title>AI-Driven IT Operations – Secrets to Success Beyond Great Math</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ai-driven-it-operations-secrets-to-success-beyond-great-math</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ai-driven-it-operations-secrets-to-success-beyond-great-math</guid>
         <pubDate>June 1, 2018</pubDate>
         <description>Once upon a time we had visibility across IT infrastructure. We had physical data centers and lovingly nurtured our servers and networks. Of course, the applications under our control became increasingly complicated, but we could always get under the hood when things went wrong. But consider this. Due to all-things cloud, most folks entering the tech workforce today will never get to see a physical server, play with a patch panel or configure a router. They’ll never need to acquire that sysadmin “sixth-sense” knowledge of what’s needed to keep the systems up and running. So, what’s needed to fill the void? Well, two things — data and analytics. There’s no shortage of data or Big Data in IT operations. Acquire a new cloud service or dip your toes into serverless computing and IoT and you get even more data — more sensor data, logs and metrics to supplement the existing overabundance of application component maps, clickstreams and capacity information. But what’s missing from this glut of data are analytics and AI-driven IT operations (AIOps for short). It’s tragic that organizations rich in recorded information lack the ability to derive knowledge and insights from it. Kind of like owning the highest-grade gold-bearing ore but not having the tools to extract it – or worse, not even realizing you have the gold at all. Most organizations understand there’s “gold in them thar hills” and are employing methods to mine it. In the last few years, we’ve seen fantastic strides in data gathering and instrumentation, with many new monitoring tools appearing almost as fast as each new tech and data source. So, as organizations sign up for a new cloud service there always seems to be another monitoring widget or open-source dashboard to go with it — along</description>
      </item>
      <item>
         <title>What's New in CA APM 10.7 - Discover the Latest Enhancements</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-ca-apm-10-7-discover-the-latest-enhancements</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-ca-apm-10-7-discover-the-latest-enhancements</guid>
         <pubDate>March 1, 2018</pubDate>
         <description>Today, we are pleased to announce that CA Application Performance Management (CA APM) r10.7 is now available. Among many great improvements, this release is strongly focused on cloud and container monitoring and on application-to-infrastructure monitoring and correlation. Cloud and Container Monitoring Organizations are embracing Docker containers to speed development, but a lack of performance visibility across more complex application architectures can compromise this goal. What's now needed are modern monitoring approaches that natively support Docker containers, Kubernetes and cloud environments and don't overburden teams with lengthy configurations and unnecessary overhead. CA APM supports a variety of cloud and container environments with a low-touch, maximum-visibility approach, including automatic flow and dependency mapping, adaptive baselining, and performance correlation across hosts, containers and applications – in the most complex and demanding distributed microservices architectures. In CA APM you can easily view container, host, application and underlying infrastructure services in one place, where all metrics and transactions are correlated across the stack to provide detailed insights, dependencies and analysis, giving you the context you need to understand these complex environments. You can easily switch between application and infrastructure views to better understand service health. The information collected is then correlated and analyzed as evidence in Assisted Triage to help reduce the noise and get to the real root cause of an issue quickly. New in CA APM 10.7 for cloud and container monitoring:
- OpenShift Monitoring – monitors performance and correlates application components to OpenShift-aware infrastructure layers. A container image of the monitoring service can be downloaded from the Red Hat Container Catalog.
- Kubernetes Monitoring – monitors performance and correlates application components to the Kubernetes-aware infrastructure layer.
- Enhanced Docker Monitoring – simplifies deployment of monitoring and correlates application components to the Docker-aware infrastructure layer.
- Enhanced VMware Monitoring – monitors VM and physical performance, correlates application components to</description>
      </item>
      <item>
         <title>How to Prevent Website Outages with Synthetic Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-prevent-website-outages-with-synthetic-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-prevent-website-outages-with-synthetic-monitoring</guid>
         <pubDate>January 4, 2018</pubDate>
         <description>Now that 2017 has come to an end, it's time to look back and reflect on some of the biggest website outages of the year and, most importantly, how to prevent similar failures in the new year. When a site goes down, revenue is lost and your customer experience suffers. So let's take a look at some examples from last year that likely could have been prevented with synthetic monitoring tools. Amazon Web Services It's no secret that a majority of companies rely on AWS to run their online business. So if they go down, the effects are felt by many. In February, AWS suffered a four-hour outage, which in turn caused 54 of the top 100 internet retailers to suffer a decrease of 20% or greater in performance, with some sites going down completely. The worst part about this one: if you were one of those affected by the outage, you may not have known until it was too late. Lowe's Your site crashing on major retail holidays like Black Friday or Cyber Monday means tons of money lost. And this was no different for major home improvement retailer Lowe's. Their site went down the morning of Black Friday as thousands of customers were trying to shop the sale. While they did get the site back up and running, there's no doubt that they lost sales. Southwest Airlines Like major retailers, airlines also offer discounted prices on Black Friday/Cyber Monday, and this Cyber Monday didn't go as planned for Southwest Airlines. Their site went down for about an hour – serving up customers trying to book travel an unexpected error message. J. Crew J. Crew was another retailer who suffered a Cyber Monday outage. The major deals being offered caused the high amounts of traffic to crash the</description>
      </item>
      <item>
         <title>Why Synthetic Monitoring is Essential to Provide a Flawless Experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-synthetic-monitoring-is-essential-to-provide-a-flawless-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-synthetic-monitoring-is-essential-to-provide-a-flawless-experience</guid>
         <pubDate>December 15, 2017</pubDate>
         <description>E-Commerce Sales Are on the Rise - Is Your Website Ready? In the digital age, the idea of shopping online from the convenience of your home has become more appealing to the masses, with US online sales growing 15.6% in 2016 and forcing traditional retailers to morph into e-commerce businesses and compete with the likes of Amazon and Overstock just to keep their customers happy. However, this shift brings about many new challenges in terms of monitoring and performance, and failure to provide a flawless user experience can leave a lasting (negative) impression. When your website becomes your business, it's important that it's working properly. Downtime = Unhappy Customers = Lost Revenue Today's online shoppers have high expectations. With no tolerance for slow load times and outages, a poor online experience can ruin your brand. For example, Lululemon, a Canadian-based athletic apparel retailer focused on expanding its e-commerce business, recently suffered a major online outage, costing it not only lost sales but also frustrated customers and a hit to its reputation. And it's not just smaller retailers who face these challenges. Even e-commerce giants like Amazon can fall victim; an outage of about 40 minutes a few years back cost the company nearly $4.8 million, showing that time really is money. The Need for a Synthetic Monitoring Solution In order to avoid costly outages, having the right monitoring solutions in place is essential. While having some form of Real User Monitoring (RUM) is obvious, one major flaw is that you only know you have a problem when a real user has experienced it. Cue synthetics. By employing a synthetic monitoring solution, you can be assured that your site is monitored 24x7 regardless of whether you have real user traffic. Should anything go wrong, you'd</description>
      </item>
      <item>
         <title>Code smarter, not harder</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/code-smarter-not-harder-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/code-smarter-not-harder-clarity-ppm</guid>
         <pubDate>May 15, 2019</pubDate>
         <description>We are coding software faster than ever before. A recent survey showed 85 percent of developers use Agile methodologies. Another 75 percent of coders expect new coworkers to be productive in three months, with a third saying it should take less than 90 days. Continuous delivery is thus becoming more real every day, as organizations look for new and innovative ways to improve speed without compromising quality or cost.

The days when organizations could just ask employees to work harder are long gone - or mostly gone; the focus now is on working smarter. From no-code to low-code, IT departments around the world have tried to squeeze efficiency out of all kinds of innovative technologies.

Yet there's a more fundamental, and far simpler, way of working smarter.

Just improve the way software development is integrated into the rest of the organization - not from a technical standpoint, but from a business and cultural perspective.

Traditional agile development focuses on software that delights customers, and that's good. But it's far more important to develop solutions that delight both customers and your own organization.

You need a solid foundation, a loyal team, to keep customers happy. You need to understand not just what customers want, but what your organization wants, why it wants it, and how it supports the business.

Give your teams that context, and they'll give you better solutions all round.
</description>
      </item>
      <item>
         <title>The Roles of Thought and Error in Accidental Internal Threats to Data Security</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/roles-thought-error-accidental-internal-threats</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/roles-thought-error-accidental-internal-threats</guid>
         <pubDate>December 14, 2017</pubDate>
         <description>Data security professionals categorize threats as internal or external, with the former category breaking down further into accidental internal threats and deliberately malicious internal threats. Errors stem from how people think, and how they think is influenced by personal and situational factors. Data security professionals address both factors, yet room for improvement exists. Fast vs. Slow: Thinking Impacts Responses Our brains allow us to choose how to think about a situation based upon its various features. Novel concepts, intriguing topics, or the luxury of time encourage people to think more carefully. In turn, this leads to fewer errors in action. On the other hand, familiarity with concepts, personal or work stressors, and lack of time might force people to take mental shortcuts that allow for quick responses. Such mental shortcuts present opportunities for errors in daily and infrequent tasks alike. Pinpointing Opportunities for Errors: Tasks and Tools Errors ultimately result from a combination of people and situations, meaning data security professionals must act on multiple fronts. Employees receive education about social engineering tactics and security policies and procedures; however, complexity in a security policy can leave unaddressed opportunities for mistakes that pose threats. Complex procedures may tempt savvier individuals to reduce complexity by skipping steps or leaving out information, while less experienced users may forget steps or perform them incorrectly. Therefore, it's critical for organizations to evaluate whether their required security procedures do more to encourage risky behavior than prevent it. Procedures are not the only potential pitfalls for ensuring data security, though. Tools for interacting with data, even those meant to help secure it, can introduce complexity. If people apply tools to tasks they do not fit, the simple act of using the tool endangers the data. 
In such cases, employees may avoid using the tool and opt for one</description>
      </item>
      <item>
         <title>Broadcom's 28nm Technology: Greater performance and less power consumption is only the beginning</title>
         <link>https://www.broadcom.com/blog/broadcom-s-28nm-technology-greater-performance-and-less-power-consumption-is-only-the-beginning</link>
         <guid>https://www.broadcom.com/blog/broadcom-s-28nm-technology-greater-performance-and-less-power-consumption-is-only-the-beginning</guid>
         <pubDate>October 1, 2012</pubDate>
         <description>When Broadcom unveils innovative new technologies, such as today's introduction of the world's first 28-nanometer multicore communications processor, it's easy to focus on the major benefits. Consider that the technology performs up to 400 percent faster, consumes up to 60 percent less power, and is optimized for service providers, enterprise data centers and cloud computing, as well as software-defined networking environments. Those are all great talking points, but Broadcom's new XLP 200-Series is about so much more. For the company itself, the announcement marks Broadcom's successful integration of NetLogic Microsystems technologies while expanding its addressable share of the $3 billion communications processor market. More importantly, for end users - the network administrators and IT experts - the technology that Broadcom now offers zeroes in on a subject that's been top of mind lately: security. Protecting the network is always mission critical, but in recent days the subject has grabbed headlines as cloud providers, social networking sites and retail banks struggle to fend off malicious cyber-attacks on their websites. The XLP 200-Series is the first multicore communications processor that includes on-chip security features that give network managers the power to thoroughly inspect, encrypt, authenticate and secure Internet traffic at wire speeds. This translates into the ability to better protect enterprise, data center and cloud networks from malware and intrusion threats at the packet level. Key integrated security features include: a grammar processing engine that parses data packets by fields, protocols or positions and assigns each parsed content to the appropriate database; a fourth-generation regular expression (RegEx) search engine, which searches packet content against a large database of security threats; 
a broad range of autonomous encryption and authentication processing engines to deliver comprehensive Layer 7 deep-packet inspection (DPI) capabilities; and complete offload of the compute-intensive security functions from the CPU cores. While the technology may</description>
      </item>
      <item>
         <title>Why Modern Application Monitoring Needs 15 Second Granularity</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-modern-application-monitoring-needs-15-second-granularity</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-modern-application-monitoring-needs-15-second-granularity</guid>
         <pubDate>September 17, 2017</pubDate>
         <description>You have certainly heard by now about the importance of high-performing applications and how a less-than-perfect end-user experience can impact your brand, revenue, user satisfaction and user retention. What does that mean in practical terms? How fast is fast? How slow is slow? Well, most industry experts agree that slow begins around three seconds. When an end user launches a mobile app or brings up your website and the response time exceeds 3 seconds, you begin seeing users drop. In fact, studies show that 40% of people abandon a website that takes more than 3 seconds to load. This means that identifying an underperforming business transaction impacting your customers must be done in a timely, efficient manner. Some application monitoring tools report response times at a 1-minute refresh rate. &quot;How much more real-time do you need?&quot;, they claim. Let's explore that for a moment. Assuming you're reading this on a desktop browser, grab your smartphone for a quick experiment. Start the stopwatch. Then open an app like your mobile banking app. Log in. Check your balance. Log out. Then check the time. I can perform those four business transactions of Login, Select Account, View Balance, and Logout in about 25 seconds start to finish. (This is not an ad for my bank, by the way, so I'll leave the name out!) Anyway, that means each step, each business transaction, took me about 2-3 seconds, with a few seconds in between each for my navigation. What was your experience? Now consider this (maybe even restart the stopwatch and let it continue to 1 minute). You'd be surprised how long those next 35 seconds seem to take! Now imagine you're monitoring those business transactions and the first step of Login and authentication is slow - maybe it even fails</description>
      </item>
      <item>
         <title>Staying Ahead Of Microsoft Office 365 Outages - AI-Driven IT Op</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/staying-ahead-of-microsoft-office-365-outages-ai-driven-it-op</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/staying-ahead-of-microsoft-office-365-outages-ai-driven-it-op</guid>
         <pubDate>December 29, 2017</pubDate>
         <description>When an enterprise moves its mission-critical office applications to Microsoft Office 365, there is a leap of faith that Microsoft's data centers hosting the servers, and the Internet connection, will remain available and perform to the mark. The most frustrating issue for a SaaS application customer is being at the mercy of the service provider to learn of any outages. The service provider may not know of an outage affecting your tenancy, or may not post notifications on its service portals promptly or frequently enough - and as an IT admin, you are left helpless, fielding calls from frustrated users. Outages happen, and Microsoft Office 365 is no exception; what IT admins want is to be notified when one occurs. This gives them a chance to get in front of an outage instead of being run over by it once tickets start arriving from angry users. A wide variety of stakeholders are interested in O365 status: the Ops engineer in IT who is responsible for day-to-day operations; the reseller who wants to know the health of the tenancies they sold; the helpdesk professional who wants to keep a tab on general service issues; the support engineer who is working with a given Office 365 user on reliability issues; the consulting professional working on an Office 365 solution; the global administrator at a company, who is typically the first to help colleagues with their tech problems; and the all-around admin at the small &amp; medium enterprise who takes care of everything from sales to IT. Like any other SaaS application, maintenance and management is the domain of the service provider.</description>
      </item>
      <item>
         <title>Common Cloud Monitoring Pitfalls ITOps Teams Need to Avoid</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/common-cloud-monitoring-pitfalls-itops-teams-need-to-avoid</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/common-cloud-monitoring-pitfalls-itops-teams-need-to-avoid</guid>
         <pubDate>December 19, 2017</pubDate>
         <description>In order to speed innovation, reduce costs and enhance agility, many business executives are opting to move their applications into public cloud environments. Whether or not organizations realize these advantages to the fullest extent possible will in part be dictated by the cloud monitoring capabilities in place. Here are some common pitfalls that IT operations teams need to avoid: Undermining The Value Of Holistic Cloud Monitoring As more and more applications run in the cloud, proactive and holistic monitoring of cloud infrastructure is becoming a necessity. Holistic monitoring doesn’t stop at the infrastructure level; it’s the ability to get deeper insights into the applications and processes running in the cloud. For example, if you are running Apache on a VM hosted on a cloud server, you need insights across all three layers to troubleshoot a performance issue. Inability To Proactively Track Cloud Utilization Without proactive insights into cloud utilization, organizations run the risk of spending on capacity they don’t need. In addition, organizations need to analyze historical data not only to better plan for future capacity and budgets but also to provide insights to development teams for better application designs in the cloud. Using Traditional Monitoring Configuration Techniques Cloud environments are highly dynamic in nature, with resources continuously spinning up and down. Traditional static, manual monitoring configuration approaches are too time-consuming. Monitoring tools for the cloud need to provide standardized, elastic configuration for these environments that allows rapid deployment with minimal human intervention. Limited Insights Into Cloud Migration Success As organizations move applications to the cloud, they need to ensure that these migrations happen reliably. By doing so, staff can most effectively ensure that no errors or performance issues arise. 
Ultimately, they need to be able to compare pre- and post-production performance metrics so they can continue to optimize service</description>
      </item>
      <item>
         <title>Discover New Enhancements in CA Application Performance Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/discover-new-enhancements-in-ca-application-performance-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/discover-new-enhancements-in-ca-application-performance-management</guid>
         <pubDate>November 13, 2017</pubDate>
         <description>Maximize the Customer Experience with CA APM Team Center As an Education Program Director at CA Technologies, one of my passions is to enable our customers, partners, and employees with CA products. I plan and execute technical learning strategies and objectives for CA Application Performance Management - in my humble opinion, one of the coolest products in our portfolio (and we have many!). Why is CA APM so cool? CA APM is designed to help IT deliver critical business services with greater efficiency while maintaining a seamless customer experience. CA APM also creates a multi-tiered view of application architecture across all environments – helping you find and fix problems before they impact your business, so you can be confident you’re delivering the experience your customers expect. Sneak peek: What's New in CA Application Performance Management In true education fashion, we've developed this engaging (and short) microlearning video. It will take you through the new enhancements in the latest version of CA APM, as well as the many amazing features added over the past year. Start your learning experience by watching this short video: Take Advantage of the New Enhancements - Upgrade Today Now that you’ve had a first look into some of the new features in CA Application Performance Management – are you ready to learn more? If you are an existing CA APM customer, don’t miss out on these new features and upgrade today. Don't know where to start on your upgrade journey? No worries, we've made it easy to upgrade. Watch this step-by-step upgrade playlist and contact Upgrade Services to help you get started today! Already have CA APM and want more training? CA Education can help you empower your teams – check out our dedicated CA APM Learning Path. I hope I've given you enough education to</description>
      </item>
      <item>
         <title>For Network Monitoring Software, Silence is Golden</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/for-network-monitoring-software-silence-is-golden</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/for-network-monitoring-software-silence-is-golden</guid>
         <pubDate>February 26, 2018</pubDate>
         <description>In a software-defined world, fault isolation is the key to network monitoring software success. With networks now responsible for billions of internet searches every day, millions of video streams every hour, and millions of texts every minute, the network is the backbone of every organization today. With the evolution of network functions virtualization (NFV) and software-defined networking (SDN), the task facing network monitoring software is exponentially more complex. At the core of NFV and SDN is an amazing amount of flexibility in the run-time state of the system and the policies that the system and user create to manage these processes. As network monitoring teams deliver services and automated orchestration systems make changes, these conditions are detected and thousands of faults are automatically created. This results in overwhelmed network monitoring teams. Figure 1: Actionable alarms minus the noise improves triage times. While fault management is not new or unique to NFV and SDN architectures, the volume and frequency of faults place a much higher value on &quot;fault isolation&quot;. One popular software-defined data center solution, Cisco Application Centric Infrastructure (Cisco ACI), has identified more than 23,000 conditions that will trigger a fault object. While the record of this information can be important to the overall management of service delivery, performance of the service, capacity planning, application usage, user load and data volume... it can be overwhelming to even the largest of network operations teams. Fault as the new KPI to consider Thus, fault isolation in modern network monitoring software becomes a critical consideration for managing NFV and SDN environments. The ability to sort out informational faults from service-impacting faults, while not new or unique to these new networking architectures, is a new key performance indicator (KPI) that network monitoring teams need to consider. 
Fault isolation is a critical KPI. Figure</description>
      </item>
      <item>
         <title>Leverage IoT adage &quot;Build Once, Use Many&quot; to Scale IT Operations</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/leverage-iot-adage-build-once-use-many-to-scale-it-operations</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/leverage-iot-adage-build-once-use-many-to-scale-it-operations</guid>
         <pubDate>February 4, 2018</pubDate>
         <description>When approaching a situation like the disruptions currently being felt within a rapidly evolving yet highly mature and specialized IT landscape, completeness of vision is essential to maintaining progress within ever-changing paradigms. Creating a process and crafting a practice ensures the tough lessons we learn do not go in vain. By leveraging the IoT adage &quot;Build Once, Use Many&quot;, effective organizations find methods to quickly scale out operations, allowing for much tighter roll-outs and ongoing operations. Comprehensive Coverage and Framework Is Critical The ability to deliver repeatable results over different technologies and doctrines is part of the advantage CA Unified Infrastructure Management (CA UIM) delivers. Forming a framework for ingesting and configuring new pieces of technology is the cornerstone of this, and for CA UIM it's elementary. Every device is quantified, from the thinnest container or bare-metal hypervisor to the most massive Z implementations. CA UIM does in many ways what .NET did for the programming world, unifying many disparate pieces into a semi-homogenous &quot;source&quot; of CA UIM data that can be carved into consumable morsels of relevant information. Effectively laying the groundwork not only positions the effective organizations I've worked with to scale, it allows them to flex when the business needs to shift 180° overnight, as it inevitably always does. For example, when a large financial services customer I worked with received word that their entire VBlock implementation was going to be sunset in favor of this new technology called &quot;OpenStack&quot;, which &quot;wasn't really virtualization but something bigger&quot;, they found success in CA UIM's ability to support the platforms needed. The initial core deployment made these massive shifts possible. 
If your structure isn't sound, even the best polish won't outshine instability, whether you're extracting deep insights or responding through automated actions,</description>
      </item>
      <item>
         <title>How to Identify What's Slowing Down Your Website with Analytics</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-identify-what-s-slowing-down-your-website-with-analytics</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-identify-what-s-slowing-down-your-website-with-analytics</guid>
         <pubDate>February 2, 2018</pubDate>
         <description>It’s no secret that today’s users demand speed. Every extra second your website takes to load will cause users to drop and potentially send them running to your competition. But identifying what’s causing your website to be slow, and for which users, is often a mystery. Today’s websites can be made up of hundreds of components making calls to different backend systems. And it’s not just your code and infrastructure anymore – advertisements, content delivery networks and other third-party integrations can have a major impact on your site performance. If one of these components is down or slow, your customers may feel like they’ve been transported back to dial-up days while they wait for your website to load. Identifying what caused slow load times for your customers can be difficult. Today, resource load time testing is usually done through synthetic monitoring - basically pinging your site from various locations around the world to determine load times. While synthetic monitoring is a good start, it has some drawbacks. The biggest gap is that you can't get a real understanding of what your customers are experiencing. If a user experiences an issue that isn't captured by your synthetic monitoring tool, your team will get support tickets and then try to replicate exactly what the user was doing, often relying on information from the user, which delays mean time to resolution. And even if you have synthetic monitoring in place, the issue may not present itself. For example, a user may encounter tailored content on your site based on their browsing history. The content and advertisements they are served may be unique to them, which makes for a great customer experience - if the site loads properly. Synthetics may not be enough to capture all of the elements you serve up</description>
      </item>
      <item>
         <title>How to Monitor Cisco SD-WAN with CA Network Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-monitor-cisco-sd-wan-with-ca-network-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-monitor-cisco-sd-wan-with-ca-network-monitoring-tools</guid>
         <pubDate>August 8, 2018</pubDate>
         <description>If you were wondering whether software-defined networking (SDN) is really a driving force in the industry yet - Cisco recently announced that it is ready to upgrade a million routers with its SD-WAN software. SD-WAN introduces software-defined intelligence to regulate the enterprise WAN for optimal application experiences. Yet the enterprise needs to monitor and validate this intelligence along with its traditional network for full assurance, without adding any more complexity to day-to-day monitoring activities than there already is. With the recent Cisco announcement, it is more imperative today than ever before that network teams adopt a comprehensive and unified approach to monitoring traditional WAN and SD-WAN environments. CA's Network Operations and Analytics solution is a unified, full-stack monitoring and analytics platform for assuring traditional and software-defined networks and provides: SDN relationship mapping that enables easy VNF management; validation of traffic decisions made by SD-WAN intelligence; easy troubleshooting workflows to assure SD-WAN health; and unified monitoring of SD-WAN and traditional WAN. So how do you start monitoring your Cisco SD-WAN environment with CA? We break it down for you here: To discover your Cisco SD-WAN environment, configure the SDN monitoring plugin to monitor inventory and performance across your vEdge routers, interfaces, tunnels, application and SLA paths. Once configured and discovered, CA's network monitoring tools reveal performance metrics on a variety of instances including CPU, memory and disk utilization, NetFlow statistics, jitter, latency and packet loss, and many others. 
The following JSON example shows a Viptela plug-in configuration: { "PLUGIN_CONFIG": { "VMANAGE_IP": "10.241.1.5", "VMANAGE_PORT": 8443, "VMANAGE_USER_NAME": "admin", "VMANAGE_PASSWORD": "admin", "PROTOCOL": "https", "INVENTORY_POLL_RATE": "0 */10 *", "INVENTORY_DELTA_TIME": 600, "PERFORMANCE_POLL_RATE": "0 */30 *", "PERFORMANCE_DELTA_TIME": 1800, "PERFORMANCE_REQUEST_COUNT": 1000, "VEDGE_PERFORMANCE_SAMPLE_INTERVAL": 300, "INTERFACE_PERFORMANCE_SAMPLE_INTERVAL": 300, "TUNNEL_PERFORMANCE_SAMPLE_INTERVAL": 300, "TIMEZONE": "GMT", "AVAILABILITY_POLL_RATE": "0 */5 *", "AVAILABILITY_DELTA_TIME": 300, "NOTIFICATION_POLL_RATE": "0 */1 *", "NOTIFICATION_DELTA_TIME": 60, "MAX_NOTIFICATION_COUNT": 10000, "DOMAIN_ID": 0 } }</description>
      </item>
      <item>
         <title>When the Blame Game Gets Real</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/when-the-blame-game-gets-real</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/when-the-blame-game-gets-real</guid>
         <pubDate>April 4, 2018</pubDate>
         <description>How to put an end to IT blame games with a single-pane-of-glass monitoring solution In today's world of IT operations, there is a major gap in the ability of monitoring tools to cross functional boundaries and allow for team collaboration. The tools operate in silos and have failed to provide cross-tool correlation from the application to the infrastructure (also known as &quot;app to infra&quot;). This lack of correlation results in never-ending bridge calls for severity 1 issues, where development and multi-disciplinary tool teams battle it out to arrive at root cause. Under the gun, they are racing against time to answer a simple question: is it the end-user device, the application, the infrastructure or the network that is causing the issue? And this is &quot;When the Blame Game Gets Real&quot; (and for some, a great photo op). Figure 1: Photo and caption credits to “APM geek extraordinaire” Pavan Aripakula – perfectly captured a chaotic situation brewing in our training war room. The &quot;blame game&quot; gets exponentially worse when faced with modern technology stacks due to the proliferation of microservices, the additional complexity of container technologies, hybrid cloud deployments, integration with legacy stacks and the high velocity of change introduced by automation and the continuous delivery pipeline. The Challenge Modern technologies like Docker containers are making it easier to build, automate and scale applications from development and testing to large-scale production deployments. However, Docker by itself is not enough, because you also need to deploy and manage these containers. This is where technologies like OpenShift and Kubernetes, amongst others, play a major role to operationalize Docker and help with development, deployment, scaling, lifecycle management &amp; orchestration. 
While organizations going through a digital transformation are building next generation applications on these new technology stacks they also are not doing</description>
      </item>
      <item>
         <title>Part 1: Is Your Network Monitoring Solution Application Aware?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/part-1-is-your-network-monitoring-solution-application-aware</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/part-1-is-your-network-monitoring-solution-application-aware</guid>
         <pubDate>March 19, 2018</pubDate>
         <description>Bridging the Gap with Application-Aware Network Performance Monitoring and Diagnostics (App-Aware NPMD) Today, a large number and wide range of applications are running on enterprise networks, including latency-sensitive voice over IP (VoIP) and video streaming traffic, critical business applications, and more. The increase in the number of applications and the varied nature of application traffic traversing the network place increased demands on network monitoring and make it all the more challenging to manage network performance. Many administrators are trying to manage their networks with network performance and availability tools. While these tools are fine for managing network devices and links, they don't deliver the fundamental insights administrators need to understand application response and network flow. With this limited visibility, administrators can't truly track and optimize application performance, and as a result, organizations suffer from poor service levels, suboptimal configurations and investments, and inefficient operations, all of which can have a significant impact on business performance. Lacking this network monitoring and application-level visibility, your organization's IT and operations staffs are apt to contend with significant challenges, such as: Lower-priority and personal user activities consume excessive resources, while the performance of critical business services suffers from costly outages and significant performance issues. It takes a long time to isolate and troubleshoot application performance issues. It is difficult to understand how network changes and new infrastructure investments will affect different applications, leading to unintended degradations and outages. Money is wasted on underutilized infrastructure. All these challenges can have a significant negative business impact, potentially eroding user productivity, revenues and customer loyalty. That's where an App-Aware NPMD solution is critical. 
With this network monitoring solution, you can integrate the vital entities of application and underlying network infrastructure and provide complete network monitoring visibility into business-critical applications and their dependencies. An AA-NPM solution provides: Improved</description>
      </item>
      <item>
         <title>Network Software for Reliable Application Response in the Cloud</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/network-software-for-reliable-application-response-in-the-cloud</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/network-software-for-reliable-application-response-in-the-cloud</guid>
         <pubDate>April 30, 2018</pubDate>
         <description>Joint CA Technologies and Ixia network software empowers NetOps with path-to-the-cloud assurance to ensure optimal end-user experiences. Today, business leaders can move their IT services to the cloud without consulting or even notifying IT operations -- that is, until an issue arises. While this has become common, operations teams are still responsible for finding and fixing problems via their network software, remaining accountable not just for performance and security but also for the end-user experience. The challenge of maintaining control does not stop with the initial deployment. IT Ops teams I've talked to that are involved in deploying applications into the cloud have cited lack of visibility, or 'cloud blindness', as one of the biggest challenges they face. They admit that they do not understand how applications behave in the cloud. A majority of them feel that the data shared by cloud providers does not give them the visibility they need to optimize cloud delivery of applications. Moreover, moving applications to hybrid infrastructure creates visibility gaps for application and network monitoring teams, as they may no longer have the operational insight needed to be effective. Yet it is critical that applications continue to deliver high levels of responsiveness and availability at all times, whether the application is deployed in the data center, private cloud, public cloud, or a combination of all three. Figure 1: Joint network software from CA Technologies and Ixia for comprehensive monitoring of packet data in the cloud. To address this challenge, CA Technologies and Ixia have partnered to develop network software best practices to monitor packet data in the cloud. The joint CA Application Delivery Analysis and Ixia CloudLens™ solution delivers path-to-the-cloud assurance and provides the end-to-end response time capabilities needed to track and optimize the</description>
      </item>
      <item>
         <title>Unified NetOps Explained</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/unified-netops-explained</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/unified-netops-explained</guid>
         <pubDate>March 20, 2019</pubDate>
         <description>Detailing our journey to unify the CA network monitoring products into a single NetOps tool that delivers operational simplicity and awareness to all our customers. Over 25 years ago, CA pioneered the NetOps industry with best-of-breed network monitoring solutions for fault and performance management of traditional networks. You know them as CA Spectrum, the first to deliver alarm correlation and isolation, and CA Performance Management, which evolved from CA eHealth, the most scalable big data solution in the market. Then came CA Mediation Manager for non-SNMP devices like fiber optics, wireless backhaul, and cable HFC. As networks became more congested, we added detailed traffic analysis via CA Network Flow Analysis, then TCP and application response analysis via CA Application Delivery Analysis to address application performance in the network. More recently we launched our assurance solution for software-defined network monitoring, CA Virtual Network Assurance. These were and still are the “best in class” network monitoring solutions. But as we all know, networks today are consolidating to serve the needs of the digital economy. Networks today share resources and are lean, with nothing wasted. They are converging and are more dynamic and complex than ever before. From mobile to cloud, the lines are blurred for traditional networking, and a more intelligent network monitoring platform is now needed. To give you the visibility you need to manage and triage these new networks, we have converged our network monitoring solutions into CA NetOps, complemented by our AIOps artificial intelligence and machine learning platform. The CA NetOps converged architecture unifies all your monitoring metrics into an easy-to-consume portal that scales to meet the dynamism of modern architectures and delivers end-to-end coverage across any network. So how are we doing it? Let’s break it down here…</description>
      </item>
      <item>
         <title>REDIS Datastore Monitoring through CA Unified Infrastructure Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/redis-datastore-monitoring-through-ca-unified-infrastructure-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/redis-datastore-monitoring-through-ca-unified-infrastructure-management</guid>
         <pubDate>March 6, 2019</pubDate>
         <description>Redis is one of the most popular in-memory databases and is well known for its high performance and its capabilities for querying, replication, high availability and automatic partitioning. It supports a range of data structures, including strings, lists, maps, sets, streams and spatial indexes. Redis is well suited for scenarios that require processing high-volume traffic from multiple sources and complex data sets in near real time. Typically, Redis is used as a memory cache, message queue and database. Redis also has a built-in replication mechanism among nodes, providing high availability, and automatic partitioning using Redis Cluster. The Redis implementation makes heavy use of the fork() system call for persistence: the child process writes data to persistent storage while the parent process continues to serve clients. Since Redis can expose a large volume of metrics, it is critical to choose the set of metrics that are essential for managing overall system performance and health without over-burdening the monitoring tool. CA UIM provides comprehensive monitoring of the Redis infrastructure, including standalone, clustered and remote deployments. Combined with CA Operational Intelligence, CA UIM offers predictive insights around performance anomalies, alarm filtering and predictive capacity analytics to help IT admins identify potential issues proactively. The section below provides a high-level view of the metric categories that CA UIM provides for monitoring the Redis application and its corresponding infrastructure. Resource Utilization Metrics Utilization metrics help identify bottlenecks in system resources such as CPU and memory. They help triage anomalies using the out-of-the-box monitoring configuration templates, which are provided with standard thresholds applied. 
With memory being the critical resource for Redis performance, metrics such as peak usage and the fragmentation ratio help manage its overall performance. Latency and Performance Metrics Cache Hit Ratio is one of the critical parameters to</description>
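The excerpt names the cache hit ratio and memory fragmentation ratio as key Redis health metrics. As a minimal sketch (not CA UIM code, and using made-up sample values), both can be derived from fields Redis reports via its INFO command:

```python
# Sketch: deriving two Redis health metrics from INFO fields.
# The sample values below are illustrative, not from a real server.

def cache_hit_ratio(keyspace_hits: int, keyspace_misses: int) -> float:
    """Fraction of key lookups served from the dataset."""
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 0.0

def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    """OS-resident memory vs. logical allocation; values well above 1.0
    suggest fragmentation, values below 1.0 suggest swapping."""
    return used_memory_rss / used_memory if used_memory else 0.0

# Hypothetical INFO snapshot:
info = {"keyspace_hits": 9_500, "keyspace_misses": 500,
        "used_memory_rss": 1_300_000_000, "used_memory": 1_000_000_000}

print(cache_hit_ratio(info["keyspace_hits"], info["keyspace_misses"]))   # 0.95
print(fragmentation_ratio(info["used_memory_rss"], info["used_memory"]))  # 1.3
```

A monitoring tool would sample these fields periodically and alert when, say, the hit ratio trends down or the fragmentation ratio drifts well above 1.0.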
      </item>
      <item>
         <title>Broadcom Recognized as a Leader in the 2019 Gartner Magic Quadrant for Application Performance Monitoring - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/broadcom-recognized-as-a-leader-in-the-2019-gartner-magic-quadrant-for-application-performance-monitoring-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/broadcom-recognized-as-a-leader-in-the-2019-gartner-magic-quadrant-for-application-performance-monitoring-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>March 26, 2019</pubDate>
         <description>By: Ali Siddiqui, Head of AIOps Segment, Broadcom For a second consecutive year, Gartner named Broadcom (CA Technologies) a Leader in the Gartner Magic Quadrant for Application Performance Monitoring. We believe this recognition validates our strong AIOps vision and our speed in delivering innovation that meets the demands of our customers and uniquely addresses market pressures. We see the boundaries of APM expanding well beyond traditional metrics and transaction monitoring. While these capabilities form the foundation of most APM tools today, we see customers struggling with broader problems such as digitally transforming legacy business practices using Agile, DevOps, and continuous delivery, as well as monitoring user experience and complex environments like containers and Kubernetes. To successfully achieve these objectives, APM tools must go beyond the confines of traditional monitoring of production applications to include intelligence, analytics, and self-remediation. Today's IT leaders are asked to monitor modern application and infrastructure architectures deployed across distributed cloud environments in conjunction with their existing IT environments. However, the ever-increasing volume, velocity, and variety of data have made it difficult to triage issues, reduce downtime, and improve performance. It's now imperative to have comprehensive visibility across both modern and traditional IT environments, coupled with AI and machine learning to correlate and analyze data and alerts for the entire ecosystem. As a result, we see the need for an AIOps solution, as reflected in our new product innovations delivered over the past year. We believe our AIOps solution is today one of the most compelling products in the market -- catering to a much larger audience with a highly differentiated set of capabilities, built for the largest scale our customers need. 
Our strengths and differentiation uniquely position us to win: Broadcom is the only vendor in the market to combine app, infrastructure, network monitoring and machine learning</description>
      </item>
      <item>
         <title>Deploy your network monitoring software in minutes</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/deploy-your-network-monitoring-software-in-minutes</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/deploy-your-network-monitoring-software-in-minutes</guid>
         <pubDate>May 12, 2019</pubDate>
         <description>CA Spectrum dockerization enables you to bring up your network monitoring software in a snap. Container adoption is growing, that is a fact, so why not deploy your network monitoring software with Docker? Portability, efficiency and deployment speed are the key benefits of dockerized CA Spectrum. No more long lists of prerequisites, large footprints and time-consuming upgrades. Running CA Spectrum in a Docker container offers multiple benefits that your NetOps team can leverage with no additional effort. Let's take a look at how to do it. This technical post explains how to deploy a dockerized CA Spectrum instance in minutes by instantiating an all-in-one image that contains a OneClick server and a SpectroSERVER. Whether deployed by an orchestrator or running as a standalone container, dockerized CA Spectrum will enable you to build a network monitoring sandbox environment to quickly showcase the solution, test an integration or use it as a network data collector. All we need to get started is a machine with Docker installed. If you simply want to kick off a dockerized CA Spectrum instance, run: docker run -it --name ls1 -e LANDSCAPE_HANDLE=128 -e IS_MLS=yes -e ROOT_PASSWORD=.qaperf184 -e TOMCAT_PORT=8080 -p 9090:8080 isl-dsdc.ca.com:5000/tools-ca-com/ssocsimage:10.3.2 Make sure you specify the password to access your OneClick instance. 
If you want to start CA Spectrum with an existing database backup, you will need to create a Docker volume first: docker volume create my-vol Then copy your database backup to the mount point of your volume, which can be obtained by running: docker volume inspect my-vol Create a folder inside the mount point first, then copy the database backup: mkdir -p /var/lib/docker/volumes/my-vol/_data/spectrum/ls1 cp db_20190220_1410.SSdb.gz /var/lib/docker/volumes/my-vol/_data/spectrum/ls1 When the container starts up, it will pick up this backup database from the mount point and load it. So let's get</description>
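The database-backup steps above can be consolidated into a short script. This is a sketch built from the post's own example values (image name, landscape handle, backup filename); the container-side mount path and the `-v` flag are assumptions, since the excerpt does not show how the volume is attached to the container, so check the image's documentation and substitute your own OneClick password:

```shell
# Sketch of the backup-restore flow from the post; not an official recipe.
docker volume create my-vol

# Look up the volume's mount point on the host instead of hard-coding it.
MOUNT=$(docker volume inspect -f '{{ .Mountpoint }}' my-vol)

# Stage the SpectroSERVER database backup where the container will look for it.
mkdir -p "$MOUNT/spectrum/ls1"
cp db_20190220_1410.SSdb.gz "$MOUNT/spectrum/ls1"

# Start the all-in-one image with the volume attached. The container-side
# path /data is a guess; the container loads the staged backup on startup.
docker run -it --name ls1 \
  -e LANDSCAPE_HANDLE=128 -e IS_MLS=yes \
  -e ROOT_PASSWORD=changeme -e TOMCAT_PORT=8080 \
  -p 9090:8080 \
  -v my-vol:/data \
  isl-dsdc.ca.com:5000/tools-ca-com/ssocsimage:10.3.2
```

Using `docker volume inspect -f '{{ .Mountpoint }}'` avoids depending on the default `/var/lib/docker/volumes/...` layout, which can differ across Docker installations.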
      </item>
      <item>
         <title>The Latest Release of CA NetOps Delivers Superior SD-WAN Fault Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-release-of-ca-netops-delivers-superior-sd-wan-fault-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-latest-release-of-ca-netops-delivers-superior-sd-wan-fault-management</guid>
         <pubDate>April 29, 2019</pubDate>
         <description>The top reasons to upgrade to the latest release of CA NetOps fault management to ensure successful SD-WAN deployments and happy customers. The modern workforce is increasingly mobile, and business-critical applications are running over the Internet across multiple clouds. Traditional WAN architectures can't keep up because of a lack of available bandwidth, limited security, and increased complexity, which prevents IT from responding faster to business needs. SD-WAN adoption is seeing remarkable growth as companies seek to streamline their WAN infrastructure and evolve toward more cloud-based applications. This is why: Reliable application delivery. Predictable application SLAs with real-time policy enforcement and active-active links for all scenarios. Best-in-class integrated security. Zero-trust foundation with authentication, segmentation and encryption. Cloud optimized. Seamlessly extend the WAN to multiple public clouds with real-time optimized performance for major SaaS applications. SD-WAN will challenge today's network operations teams The expansion of cloud-based applications and infrastructures, coupled with non-guaranteed connectivity, requires visibility and assurance across the WAN to ensure the end-user experience of critical applications. Software-defined overlays add complexity that requires understanding and correlation of the control-plane and data-plane infrastructure. SD-WAN introduces intelligence that needs to be validated and refined as the maturity of the technology and its deployments increases. CA NetOps 19.1 SD-WAN fault management addresses NOC challenges CA NetOps 19.1 SD-WAN fault management (CA Spectrum) addresses these challenges by providing in-depth visualization with underlay &amp; overlay topology. CA Spectrum directly manages the vEdge routers using SNMP and reconciles both SNMP &amp; Cisco Viptela and Versa controller information. 
Full insights into how different sites are connected, including policies and tunnel information. Easily view degraded or failed connectivity. Overlay topology enabled for Cisco Viptela vSmart (controller) provides vEdge router WAN-link connectivity to provider network (gold/silver/mpls). Visualize how different vEdges are connected and their corresponding Transport and SLA</description>
      </item>
      <item>
         <title>Better Together: AIOps &amp; Automation - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/better-together-aiops-automation-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/better-together-aiops-automation-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>May 19, 2019</pubDate>
         <description>See how AIOps &amp; Intelligent Automation can help fuel better customer experiences at the Gartner IT Infrastructure, Operations and Cloud Strategies Conference For today's businesses, delivering optimized, high-value digital experiences isn't just important, it's a matter of survival. Customers predominantly interact with businesses through digital channels and expect nothing short of a problem-free experience - meaning the experience you deliver can truly make or break your business. Consequently, the pressures on IT teams continue to mount as they strive to track and manage service levels while contending with the increasingly dynamic, hybrid, and distributed nature of their computing environments. As a result, organizations are rapidly adopting AI-first strategies. But how do these AIOps technologies truly enable a new level of digital experience? Combating the Challenge of Too Much Data Dealing with the increased volume, variety and velocity of data coming from today's modern technologies like cloud, containers, and software-defined networks has brought about major challenges for IT teams trying to maintain and deliver optimal service levels. And while delivering optimized service levels only continues to get more critical, meeting this objective only continues to get more challenging. In order to silence the noise and create apps that are truly self-healing, IT teams need systems that can automate problem recognition and the execution of the multiple corresponding steps needed to fully remediate issues across complex hybrid IT environments. The only way to achieve this is to shift from a point-tool monitoring approach to a platform-oriented approach, in order to eliminate the current silos of monitoring and automation tools. 
CA Technologies, A Broadcom Company offers a combined AIOps and Intelligent Automation solution which provides teams with the proactive, automated remediation capabilities needed to fuel superior user experiences, while offering fundamental breakthroughs in scale and efficiency. If you'll be attending Gartner's IT</description>
      </item>
      <item>
         <title>NetOps 19.1 Delivers Web-based OneClick Interface for CA Spectrum Users</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/netops-19-1-delivers-web-based-oneclick-interface-for-ca-spectrum-users</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/netops-19-1-delivers-web-based-oneclick-interface-for-ca-spectrum-users</guid>
         <pubDate>April 22, 2019</pubDate>
         <description>The CA Spectrum OneClick Java thick client is now web-based, offering all administrative and operational capabilities and features via a new HTML5 interface and the NetOps Portal. Complex network environments require multiple network operators who diligently keep track of the network, ensuring a consistent experience for their users and customers. These networks use network monitoring applications like CA Spectrum that rely on a client-server architecture. Operators open such monitoring applications on the client using Java/JNLP so that they can cater to multiple operator personas. While this is a workable way to ensure that multiple operators can access the network monitoring application, it comes with challenges around regular updates, security vulnerabilities and restarts. With CA NetOps v19.1, we are proud to introduce the CA Spectrum WebApp (powered by Webswing), which reinforces our focus on ease of use and our commitment to overcoming the challenges of opening the OneClick client with Java. The WebApp recreates an exact instance of the OneClick client in HTML5 via Webswing, so users no longer need to download a JNLP file or wait for Java updates before starting the CA Spectrum OneClick client. To make workflows easier for operators and admins, we have enabled access to the WebApp from within the NetOps Portal and from the CA Spectrum start page. OneClick WebApp link on CA Spectrum interface OneClick WebApp link in HTML5-based NetOps Portal Let us now compare some views of the WebApp and the Java client, which show that the capabilities a user enjoys in the Java client continue to be available in the web interface. Webswing includes several administrative capabilities which add value to the Network Admin role. These allow the network or tool administrator a view of session statistics and session control, with the ability to</description>
      </item>
      <item>
         <title>New, Modern User Interface in CA UIM 9.0.2 - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/new-modern-user-interface-in-ca-uim-9-0-2-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/new-modern-user-interface-in-ca-uim-9-0-2-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>February 11, 2019</pubDate>
         <description>As many know, Adobe is ending support for Flash by 2020. For this reason, many popular web browsers are dropping support for Flash. Moreover, Flash has its own disadvantages in terms of security and its inflexibility around enterprise-grade UI creation. This presented the CA Unified Infrastructure Management product team with an opportunity to build a new user interface for the operator persona in the recent 9.0.2 release. The Operator Console, available in the Unified Management Portal, provides users with an alternative way to manage membership in devices, groups, and device monitoring profiles, and to view dashboards and alarms. The Operator Console provides you with a graphical, clickable means to navigate through system operations and monitoring results. Summary views of monitored technologies, devices and groups, and alarms are linked to in-depth views of system components and metrics. The product team built the new operator console for the Server Administrator, NOC Operator, and Service Provider personas using the following UX principles: Design Should Focus on an Experience The Operator Console effectively weaves together a combination of text, graphics, layout, and interactive elements to ensure users have an experience, not just an informational view. Given the number of infrastructure monitoring tools, and the complexity and quantity of information that comes with them, the product team made sure to segregate the information and provide contextual navigation when creating the Operator Console UI. People Scan Screens, They Don't Read Them This is extremely applicable for an ITOM tool such as CA UIM that provides out-of-the-box monitoring support for various technologies and services. The operator's main responsibility is to make sure the data center is up and working. They need to &quot;follow the red&quot; to triage issues in the underlying components. The Operator Console allows the user to quickly start</description>
      </item>
      <item>
         <title>PODCAST: AIOpsology 101, A Discussion with Ali Siddiqui, Head of AIOps at Broadcom - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-aiopsology-101-a-discussion-with-ali-siddiqui-head-of-aiops-at-broadcom-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-aiopsology-101-a-discussion-with-ali-siddiqui-head-of-aiops-at-broadcom-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 9, 2019</pubDate>
         <description>Ali Siddiqui is head of the AIOps and monitoring segment for the Enterprise Software Division at Broadcom, driving the DX APM, DX NetOps, and AIOps solutions. His responsibilities include R&amp;D, product marketing, and the CSA team, which drives adoption of our solutions for customers.</description>
      </item>
      <item>
         <title>Broadcom open networking solutions – a history of industry-leading innovation</title>
         <link>https://www.broadcom.com/blog/broadcom-open-networking-solutions-a-history-leading-innovation</link>
         <guid>https://www.broadcom.com/blog/broadcom-open-networking-solutions-a-history-leading-innovation</guid>
         <pubDate>April 5, 2018</pubDate>
         <description>Every open source program defines success a little differently. Goals will vary according to the reasons each company or organization chooses to invest in open source — whether it’s to recruit developers, bring in new ideas and technologies through open innovation, or achieve a faster time to market, among others. Whatever the reason, there are some standard ways to measure open source project success, including: The developers’ participation and level of influence in external open source projects The business-critical operations for which a specific open source project will be used How robust the open source project is, and whether it has been successfully deployed in production environments The open networking ecosystem has evolved rapidly in the past few years, driven by industry consortia and open source software components such as ONIE, Open Network Linux, multiple open networking operating systems, and various automation tools, many of which have been adapted from the server world. Broadcom has played a pioneering role in the development of the entire open networking ecosystem with innovative contributions in both hardware and software. In hardware, Broadcom has worked with multiple partners to contribute networking designs based on multiple generations of our switching and routing merchant silicon to projects such as the Open Compute Project. In software, Broadcom has played a leading role through the depth and breadth of its contributions to the open networking ecosystem, including cloud, SDN and open projects driven by service providers. Through its partners and customers, Broadcom enables next-generation services through advanced instrumentation and telemetry. A few important contributions are below. 
SDKLT -- Open Source Logical Table Base Switch SDK The industry's first open source switch Software Development Kit based on an innovative Logical Table approach. SDKLT is a powerful, feature-rich open source Software Development Kit that enables a new approach to switch</description>
      </item>
      <item>
         <title>Best Practices to Succeed in your AIOps Strategy - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/best-practices-to-succeed-in-your-aiops-strategy-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/best-practices-to-succeed-in-your-aiops-strategy-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>April 2, 2019</pubDate>
         <description>Six tips to help you roll out your analytics solution AIOps is more than just the latest buzzword; rather, it is becoming an essential piece of successful digital transformation initiatives. But in order to succeed, you must have the proper strategy in place. In this blog, you will find some suggested best practices to help you succeed in your AIOps journey by mitigating challenges and simplifying adoption. These best practices focus on two key challenges for modern operations: the vast amount of data being collected, and the criticality of being agile and proactive. They can be implemented in any analytics platform and have been proven to help our customers. 1: Tag your Data Metrics, logs, inventory, topology… Terabytes of data will flow into your Data Lake, so tagging your data is critical in order to get value out of it. Tagging will ease the browsing, searching and visualization of data across your distributed analytics repository, so be sure to always tag your data when it is ingested into the platform. Any connector, API or event forwarding utility should facilitate this task. It is usually much more efficient to tag the data at the time of ingestion than to do it at a later stage, when the data is at rest in the Data Lake. A good collection of tags can be: Domain (e.g., netops, application, infra…) Geo (EMEA, US, APJ, country code) Owner Source product or application Team/Department Figure 1: Kibana search using tags 2: Secure your Data Always choose secure connectors to transfer data in and out of your analytics platform. For instance, any log flowing into the solution must be sent over TLS (e.g. syslog) or HTTPS (API endpoints). Data should be secured not only in transit but also at rest; methods like dm-crypt will help to encrypt your</description>
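To illustrate the tag-at-ingestion tip, here is a minimal sketch (a hypothetical helper, not a CA product API) that merges the suggested tag categories into each record before it is written to the data lake:

```python
# Sketch: attach standard tags to every record at ingestion time, so
# downstream search and dashboards can filter on them. The tag values
# below are illustrative examples, not required names.
import json

STANDARD_TAGS = {
    "domain": "netops",        # e.g. netops, application, infra
    "geo": "EMEA",             # e.g. EMEA, US, APJ, country code
    "owner": "net-team",
    "source": "ca-spectrum",   # source product or application
    "department": "IT-Ops",
}

def tag_record(record: dict, tags: dict = STANDARD_TAGS) -> str:
    """Merge standard tags into a record and serialize it for ingestion."""
    tagged = {**record, "tags": dict(tags)}
    return json.dumps(tagged, sort_keys=True)

line = tag_record({"metric": "if_utilization", "value": 73.2})
print(json.loads(line)["tags"]["domain"])  # netops
```

Doing this in the connector, as the post suggests, is cheaper than back-filling tags once terabytes of untagged data are already at rest.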
      </item>
      <item>
         <title>Strategic planning from the bottom up; one agile development team’s search for its purpose - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/strategic-planning-from-the-bottom-up-one-agile-development-team-s-search-for-its-purpose-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/strategic-planning-from-the-bottom-up-one-agile-development-team-s-search-for-its-purpose-rally-software</guid>
         <pubDate>February 14, 2018</pubDate>
         <description>Last month, I participated in my team's first strategic planning session. I'd been to strategic planning sessions before, but never one specific to an agile development team (I'm an agile software developer on a backend team in CA Agile Central.). I'd like to share some things I learned from the experience, and how this could be useful for all teams within an engineering department. According to Wikipedia, &quot;Strategic planning is an organization's process of defining its strategy, or direction, and making decisions on allocating its resources to pursue this strategy.&quot; Businesses have been doing this since the sixties, and apparently everybody who goes to business school learns how to run them. Our product owner, Dan Green, ran strategic planning sessions for development teams at a previous company, and believed our team could benefit from the process. Going into the meeting, our intention was to have a better sense of our team identity -- including where we'd come from -- and determine where we wanted to go and why. Finding Ourselves Our first session was a half-day. For a number of topics, we wrote on sticky notes how we thought our team related to the topic. One was Value: &quot;What value does our team bring to the organization?&quot; Our answers included that we &quot;fix customer defects&quot;, &quot;act as a resource for backend development&quot;, &quot;provide a fast, robust search service for customers&quot;, etc. Next, our team did our own SWOT analysis. That is, we examined strengths, weaknesses, opportunities and threats for our team itself. We learned that we all respect each other's work ethics and abilities, but that we don't celebrate our accomplishments nearly enough. We agreed that we have opportunities to share technical knowledge across the organization and to improve the backend service upon which our front-end teams rely, but we</description>
      </item>
      <item>
         <title>Continuous Delivery Will Make or Break Your Product</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/continuous-delivery-will-make-or-break-your-product</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/continuous-delivery-will-make-or-break-your-product</guid>
         <pubDate>March 19, 2018</pubDate>
         <description>In today's market, speed wins. There are likely many companies in your market all trying to solve the same problems. You can have a better product than your competitor, but if you can't get it into the hands of your users, it might not matter. Once someone starts using a product, it takes a good reason to motivate them to switch. First to market matters. So what's a product manager to do? There are lots of ways to speed up your product delivery process. Lean Startup, Agile, and DevOps practices offer options to increase speed and decrease waste along the way. However, one of the most important practices is Continuous Delivery. Continuous Delivery is the ability to easily and quickly get product changes to your users. And Continuous Delivery will make or break your product. This is the first post in a series that explores Continuous Delivery. This series targets product managers, who often have a significant influence on product decisions such as investing in a new feature or in Continuous Delivery. This post explores the why and what behind Continuous Delivery. Future posts will explore the journey of a single change, practices that help support continuous delivery, and how to keep the continuous in continuous delivery. What is Continuous Delivery? At a high level, Continuous Delivery is the ability to easily and quickly release each change to your users. A change may be a new feature, an improvement to an existing feature, a bug fix, or an experiment to validate a new idea. I work on CA Agile Central - a product that helps teams and teams of teams plan, execute, and iterate at scale. We continuously deliver each change when it's ready, about 20 times per day across 16 teams. Some of these changes are immediately available to our</description>
      </item>
      <item>
         <title>How to Integrate CA APM with Runscope API Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-integrate-ca-apm-with-runscope-api-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-integrate-ca-apm-with-runscope-api-monitoring</guid>
         <pubDate>April 3, 2018</pubDate>
         <description>By connecting CA Application Performance Management (APM) with Runscope API monitoring, you can get a complete picture of your application and can easily find the root cause of why an API is slow or failing. Watch the following video to learn how to set up the Runscope and CA APM integration, or follow the step-by-step instructions below: Connecting CA APM with Runscope Go to your CA APM SaaS tenant and log in. After that, copy the URL (without the path) that we'll be using in the next steps. For example: https://954976.apm.cloud.ca.com In your Runscope account, click on your profile on the top-right and select Connected Services: Find the CA Technologies logo and click on Connect CA APM: Paste the CA APM URL that we copied in the first step in the text field and click on Enable APM Traces: And you're all set! Next, we'll look at how to start sending the API monitor information to our CA APM instance. How to View an APM Trace The first thing we need to do is enable our integration in our API monitor environment settings, to start sending information from Runscope to our CA APM instance: In your Runscope account, select a bucket and open an API monitor Select Editor on the left-hand side, and open the environment settings Open the Integrations tab in the environment settings and turn on the flag for the CA APM integration. Now that the integration is enabled, we can click on Save &amp; Run at the top. After the test is completed, open the result page under &quot;Recent Test Runs&quot;: The first link at the top next to Trace, &quot;View in CA APM&quot; will show the metrics map for the entire test. You can also expand each individual API request and select the &quot;Connection&quot; tab. You'll</description>
      </item>
      <item>
         <title>Leveraging Metrics: Cost Per Story Point</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/leveraging-metrics-cost-per-story-point-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/leveraging-metrics-cost-per-story-point-rally-software</guid>
         <pubDate>July 5, 2018</pubDate>
         <description>Background Keeping up with metrics can be challenging. I was recently on site with a customer and as we ended our conversation she asked about calculating the cost of a story point. Promising to get back to her with my response, I walked to my car and left the site contemplating her question. After some back and forth discussion with colleagues where we each shared our experiences and insights, I put together my thoughts. When talking about cost per story point it is important to take note of certain events and it is important to understand the basic steps associated with the process. As a company adopts this metric as part of the organization's toolkit some very real and tangible benefits will surface. While it is important to realize how utilizing cost per story point can bolster your business, at the same time it is important to know that misusing this metric can prove to be detrimental. For stable teams, this can be a valuable metric! However, if team composition is constantly changing and/or the team hasn't yet achieved a stable velocity you must use caution. In such instances using the cost of a story point can be a very misleading figure and potentially quite harmful. It is not uncommon to see companies where management wants to use cost per story point as a standard measure across teams. Often, management will try to make all the teams use the same point measures. This invariably leads to anti-patterns which always prove to be harmful. Remember, like velocity, it's never valid to compare the cost per story point between teams. Every team is composed of unique individuals with their own set of abilities, work habits, strengths, and weaknesses. As a result, teams will invariably</description>
      </item>
      <item>
         <title>CA Converges Network Fault and Performance for its Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-converges-network-fault-and-performance-for-its-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-converges-network-fault-and-performance-for-its-monitoring-tools</guid>
         <pubDate>April 13, 2018</pubDate>
         <description>Integrated network monitoring tools enrich insight into the performance, availability, and event status of your network. CA Spectrum and CA Performance Management are two of the premier monitoring tools and the foundation of CA's Network Operations and Analytics portfolio. CA Performance Management's strengths of scaled collection and analysis of network performance data complement CA Spectrum's core proficiencies of detailed network modeling and monitoring, advanced fault detection, and root cause analytics. In the latest releases, the network fault and performance integration shares models, global collections, and events between the two systems. CA Spectrum contributes devices, interfaces, and groups to the CA Performance Management inventory, which CA Performance Management can monitor. CA Performance Management contributes infrastructure performance events to CA Spectrum, so you can see performance events and fault alarms side by side in OneClick. Figure 1: Seamlessly integrated network fault and performance data in a single network operations dashboard. CA's network fault and performance converged features include: Device Integration Life Cycle Status Interface Synchronization IP Domains Groups Multi-Tenancy Event Integration Alarm Integration Figure 2: The CA Spectrum and CA Performance Management monitoring tools integration architecture. Let’s dive into the feature sets. A convergence of our monitoring tools provides an easy way in which an established CA Spectrum installation can drive the automated discovery of devices, interfaces and other components in CA Performance Management for an intuitive NetOps monitoring experience. A construct called an IP Domain is a key object type which allows CA Performance Management device discovery to be driven from CA Spectrum.
IP Domains are created in CA Performance Management and automatically synchronized over to CA Spectrum appearing in the Global Collections sub-tree in the OneClick navigation panel. In CA Spectrum, IP Domains are an extension of global collections and can be populated with devices or interfaces in the same</description>
      </item>
      <item>
         <title>Upgrade Your Application Performance Monitoring Tool in 4 Easy Steps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/upgrade-your-application-performance-monitoring-tool-in-4-easy-steps</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/upgrade-your-application-performance-monitoring-tool-in-4-easy-steps</guid>
         <pubDate>August 15, 2017</pubDate>
         <description>It's no secret that upgrading your existing application performance monitoring tool can be a time-consuming and extensive process. But here at CA, we are committed to making this process as easy and smooth as possible by helping you each step of the way. Recently I was asked to help a customer get up to speed with the latest version of CA Application Performance Management (APM) 10.5 and some of its new features and capabilities. As you can imagine, there is a bit of a learning curve for customers coming from an older version. In this instance, one of the key customer requirements was to create dashboards quickly so they could give APM access to several of their app and support teams as part of their on-boarding strategy. They primarily had three categories (or roles) of users for their dashboards, as described below: Executive: &quot;How are my revenue impacting services doing and is end user experience good?&quot; Support: &quot;Are systems, apps and data centers that I am responsible for healthy and if not what's the root cause and who do I triage to?&quot; App Dev: &quot;Are any of my recent changes to servlets, EJB's and DB's causing any negative impact and if yes how and what's the root cause?&quot; Apart from role-based dashboards, another ask was the ability for a user to view his or her own data and nothing else. Not an unusual request, as many of our customers have similar requirements; however, traditional Application Performance Monitoring tools don't offer an easy solution. The Solution – CA APM 10 Enter CA APM 10 – it completely redefines the way dashboards are created and used. Among many features and functionalities, CA APM 10 introduced the concept of Perspectives, Attributes and Universes that together make dashboard creation a piece of cake. Before we</description>
      </item>
      <item>
         <title>Freedom for the Mainframe Developer, Bringing Git to CA Endevor</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/freedom-for-the-mainframe-developer-bringing-git-to-ca-endevor</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/freedom-for-the-mainframe-developer-bringing-git-to-ca-endevor</guid>
         <pubDate>November 1, 2018</pubDate>
         <description>As large enterprises integrate the mainframe into their digital transformation plans, the question of how to transition to the new without disrupting what exists is a nagging dilemma. This question is critical especially when thinking about how to manage mainframe source code and its lifecycle without disruption. Check out our short video. What is Working Mainframe source code is traditionally managed by Source Code Management (SCM) tools like CA Endevor® Software Change Manager which provide Enterprise mainframe version control, build and lifecycle management of the source. SCM tools ensure effective controls and management of quality and reliability as the code is developed and pushed through test and production. The Challenge Transforming from what works on the mainframe to leveraging modern software development tooling like Git without disruption: this is the holy grail. There are disruptive approaches to starting over, each of which carries risks to the existing code lifecycle and auditability expectations. Why not then keep what is there and add Git integration? Let those who want to leverage Git's experience and merge capabilities use Git, and those who want to stick with ISPF continue without disruption. Teams For teams in enterprises, source code changes by multiple developers are inevitable. Integrating enterprise Git solutions like Atlassian Bitbucket best allows for the kind of parallel development teams are used to. A developer can clone his repository, create the needed branches, and ready his workspace for changes quickly and easily. The developer can now operate on the mainframe source as he would any other language and save it to Git. From Git, he submits it to Bitbucket, where the team can resolve any change conflicts. From there, the developer or team merges the code into CA Endevor SCM, which completes the build. With a successful build the remainder of the CI/CD procedures continue.</description>
      </item>
      <item>
         <title>Artificial Intelligence based Pattern Recognition with CA APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/artificial-intelligence-based-pattern-recognition-with-ca-apm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/artificial-intelligence-based-pattern-recognition-with-ca-apm</guid>
         <pubDate>July 10, 2018</pubDate>
         <description>Using Neural Networks for Proactive Triaging The power of machine learning reaches its full potential with the combination of rich, relevant, and reliable data. In the domain of application performance monitoring, it's rather imperative to have a rich collection of data; however, it requires a combination of domain expertise, statistical learning, robust underlying mathematical models and machine learning models to build efficient capabilities that leverage Artificial Intelligence. In this blog, we talk about how we use neural networks to provide CA Application Performance Management (APM) the ability to learn and recognize complex patterns formed by multiple metrics and inform the users in advance about critical situations and the need to take action. The beauty of the solution lies in the human-interpretable cognition of a situation, formed by accounting for different aspects of the system holistically instead of the single-metric analysis that has been implemented in the past. CA APM customers can access the status of multiple metrics that are reported to the Enterprise Manager by the APM agents. These metrics are typically collected every 15 seconds and report different aspects of the application performance. When any of these metrics behave abnormally, CA APM raises an alert/alarm. Typically, APM solutions capture anomalous behavior in two ways: 1. Static Threshold Based Alarms: When any of the metric values goes past a preset, user-configurable threshold, an alarm is raised. 2. Univariate Statistical Analysis: Based on historical behavior of each metric, an alarm is raised whenever the metric value goes past the nth percentile value. However, APM users are required to set thresholds for different metrics in order to identify the anomalous situation and generate alerts. This creates extra work on the end users’ side and leads to unwanted alarms, poorly informed thresholds, and overall degradation of the user experience. In addition, these alarms/alerts may</description>
      </item>
      <item>
         <title>DevOps Tool Tyranny</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/devops-tool-tyranny</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/devops-tool-tyranny</guid>
         <pubDate>October 10, 2018</pubDate>
         <description>In the software development world, we hear the adage, 'use the right tool for the job' all the time. Its use goes back decades, and we've all been told, 'you don't hammer a nail with...' For me, deciding on the tool is often the most important step in the process (as significant as how you use it) because the implications are long-term and can be expensive to undo if you make the wrong choice. When it comes to programming languages, different languages are better suited to specific use cases than others. In other instances, the decision is less clear cut. For example, today, if I were to develop a multi-threaded application I would select Go or perhaps even Node.js in a Kubernetes cluster. I would not choose Java for such a project. No doubt some reading this may disagree with my example, and that illustrates my point. It can be difficult to determine which language is the best for a particular project; there are lots of factors that must be weighed and considered. Early in my career, I learned the benefits of putting the greater good of the whole project above the benefits of any specific language. I asked questions like, 'Does anyone on my team know the language?' 'Is the language one with staying power or is it simply in vogue and destined to go out of fashion?' 'Is learning this language a pet project for the recommending developer and will they leave after they are done and bored?' 'Do I have the talent to maintain this project or will I be caught in a continual rewrite cycle?' While I learned that standardizing languages provides stability and, perhaps ironically, nimbleness, the 2017 State of DevOps report suggests the opposite when it comes to CI/CD and DevOps tools. It highlights</description>
      </item>
      <item>
         <title>7 Reasons to Automate SAP System Copy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/7-reasons-to-automate-sap-system-copy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/7-reasons-to-automate-sap-system-copy</guid>
         <pubDate>June 14, 2018</pubDate>
         <description>Reduce person-days spent on this frustrating task to free funds for digital innovation. OK, so SAP system copy isn't the sexiest subject. It's not a buzzword that illuminates tech conferences and you won't find it high on many people's agendas. Yet although we read endless articles about digital transformation in the cloud, the reality is that most companies still rely heavily on their data centers and legacy systems. But CIOs, under constant pressure to keep legacy on-premises infrastructure and software running, will see saving time and money on SAP system copies as a big win. Indeed, companies typically invest large amounts of money and resources into SAP; after all, reliable German engineering does not come cheap (just ask Mercedes drivers). SAP experts are rare and expensive human resources. Minimizing costs in repetitive, time-consuming processes leverages such investments and frees up resources for digital initiatives that excite the C-suite. Many person-days are wasted unnecessarily on manual efforts. Often, we need to update SAP programs for new business and test against real-life data or test the functionalities of a new SAP release. This is traditionally a nightmare for IT departments but can be avoided with automation. Rehau, for instance, went from spending 35 person-days per year on system copies down to just one. Here are 7 good reasons to automate SAP system copy: Boring, Repetitive Tasks SAP admins usually hate this kind of task because it involves a cumbersome list of individual tasks and complex connections between SAP systems. Then there is the list of pre- and post-processing tasks required, which is just as extensive. Automation solutions provide a comprehensive template of every step needed for a system copy, which means that steps are not missed as they can be with a manual process. Human Error This naturally means there is a large margin for</description>
      </item>
      <item>
         <title>Mainframe DevOps Foundation - Modernize or Replace?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-devops-foundation-modernize-or-replace</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-devops-foundation-modernize-or-replace</guid>
         <pubDate>April 4, 2019</pubDate>
         <description>Understanding the difference between Legacy and Heritage will have an impact on business decisions! The past decade has witnessed the Mainframe application development space evolve to embrace Agile, Lean and DevOps methodologies, proving that those are not just buzzwords and not just for born-on-the-web companies or Unicorns. The key part of that evolution is to understand that the transformation from the age-old practice of &quot;waterfall&quot; to lean, agile methodology is a journey aimed at continuous improvement while taking into account the day-to-day realities of cost optimization and finding ways to do more with less -- in other words, to optimize business efficiency. With that context, as a techy, it's frustrating when I hear statements like &quot;as an organization, you need to replace your tools because they are legacy.&quot; I feel the focus should be on evaluating whether the tool is evolving to keep up with the market and technological trends or maintaining the status quo. Before going further, understanding the difference between Legacy and Heritage is important. Legacy is something that you leave behind. It keeps you in the past. Heritage is something you inherit to build for the future. Not understanding the difference between the two can lead to wrong business decisions. The reality of the mainframe space is that many tools, including the mainframe platform itself, are branded &quot;legacy&quot; purely because they have existed for a very long time. The reality is that the mainframe as a platform has been evolving and continues to re-invent itself in all aspects including hardware, software, and pricing. Following the cue, mainframe tools and applications (e.g., REST-enabling a legacy COBOL CICS transaction) have been evolving as well. By not maintaining the status quo, the mainframe ecosystem is looking to the future. Organizations are embracing mainframe as the heritage and are experiencing growth in</description>
      </item>
      <item>
         <title>How Machine Learning and Data Lead to Predictive Intelligent Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/how-machine-learning-and-data-lead-to-predictive-intelligent-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/how-machine-learning-and-data-lead-to-predictive-intelligent-automation</guid>
         <pubDate>August 11, 2017</pubDate>
         <description>Staying ahead of industry trends and innovations in the market is key to making sure we're developing products that address our customers' most pressing needs. To give you the latest insights from our leadership, our GMs will be sharing their thoughts on what they're seeing and how we can stay ahead.

Data is the new currency.

Access to massive data (both structured and unstructured) and immense computing power for analysis has increased the opportunity for machine learning at a pace that was never possible before. Now, technologies like self-driving cars, robotic assistants, and the internet of things are part of everyone's everyday life. And faster machine learning, analytics, and automation are setting the stage for even more advancements. What are the most interesting trends you are following and excited about when it comes to machine learning? Continue reading ...</description>
      </item>
      <item>
         <title>Stay Calm and Let Network Monitoring Remove the Holiday Stress</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/stay-calm-and-let-network-monitoring-remove-the-holiday-stress</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/stay-calm-and-let-network-monitoring-remove-the-holiday-stress</guid>
         <pubDate>December 3, 2018</pubDate>
         <description>The top use cases to keep your network operations &quot;stress-free&quot; during the holidays The holidays can be stressful. Whether it's the in-laws, traveling, or that uncle who has no filter when he tells eyebrow-raising stories after a couple of glasses of wine, we all have enough on our plates during this time of year. CA understands that while we may not be able to help you with that &quot;uncle problem&quot;, you shouldn't have to stress out about how your network is performing during the busy holiday season when you have a reliable network monitoring platform doing most of the work for you. Stay Calm and let CA's network monitoring help: 1. Say goodbye to swivel chair monitoring. Because of rapid advances in networking technology coupled with user demands, network managers often find themselves with way too many network monitoring tools, each designed to manage or monitor a single aspect of the enterprise network and application performance. Effective network management today starts with one view: a convergence of network operations that enables network managers to perform comprehensive and scalable monitoring and analytics and should always include these four critical factors: one NetOps portal, one OpenAPI, one data collector and one context. 2. Chill out and reduce the noise. Modern network architectures like Cisco ACI have tens of thousands of events defined with hundreds of unique messages and alarms. This many events and faults can flood your network and overwhelm an operations team's ability to troubleshoot efficiently. CA's award-winning, #1-ranked Network Operations Analytics platform reduces the noise by suppressing non-critical alarms to allow the NOC to focus on the real root cause of any outage for faster triage. 3. Go vintage and still easily embrace the &quot;new&quot;. SDN technologies may be a little more mainstream than they were years ago</description>
      </item>
      <item>
         <title>The Benefits of SaaS Application Performance Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/the-benefits-of-saas-application-performance-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/the-benefits-of-saas-application-performance-monitoring</guid>
         <pubDate>October 4, 2017</pubDate>
         <description>Top 5 reasons to try APM SaaS today While you may know CA Technologies as an innovator in the world of Application Performance Monitoring and Management (APM) for more than 15 years, you may not know that we also offer a SaaS APM option. Our SaaS Application Performance Management is available along with App Experience Analytics and infrastructure management within our SaaS-based digital experience monitoring and analytics solution, Digital Experience Insights. Here are the top five reasons you should check out APM SaaS today: 1. APM SaaS includes a demo app so you can check out APM in action even before deploying any monitoring to your app. One of the most common requests we hear is for a test drive of APM without impacting organizational systems. So what could be better than using &quot;Walk Me&quot; guides to see APM auto-detect an app problem, organize the evidence into the Analysis Notebook and direct you to the suspected Culprit root cause from a demo application? There is nothing to deploy in your environment or your application until you are ready. Figure 1: Experience View displays monitoring for a Demo Application (comprised of a Trading Service and Reporting Service) so users can navigate and triage an app issue before even deploying any monitoring into their own application. 2. Smart agents make it easy to start monitoring your own application. When you are ready to begin monitoring your application, smart agents make it easy to get started. Simply follow the 1-2-3 steps to choose your Unix/Linux/Windows OS, select your agent for download (such as Tomcat, JBoss, Node.js, PHP or many others), then follow the steps for where to extract the agent archive and enable it within your application. There is no collector to install; simply enable an agent for your app. Once the agent connects into</description>
      </item>
      <item>
         <title>The Relationship Between Robotic Process Automation and Workload Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-relationship-between-robotic-process-automation-and-workload-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-relationship-between-robotic-process-automation-and-workload-automation</guid>
         <pubDate>December 18, 2017</pubDate>
         <description>How does the advent of RPA fit within the broader automation landscape? Robotic process automation (RPA), a concept that has emerged over recent years, is still in a state of rapid evolution, existing without a clearly defined end-state or direction. As such, vendors are experimenting and pushing their products into uncharted waters – successfully or otherwise. Nonetheless, we can be sure that artificial intelligence and machine learning will continue to develop and have an impact on automation solutions as a whole, even if at the moment these capabilities do not frequently exist within the RPA space. Indeed, it is worth noting the ‘robotic’ tag is something of a misnomer and refers primarily to the tool’s ability to carry out repeatable executable tasks, not a form of AI. RPA is in fact most commonly used to emulate keystrokes; it runs through the application interface and its processes are defined using demonstrable steps – one of its selling points is that these rules do not require code and can be taught by a non-technical end-user. To put it simply, RPA manipulates existing software applications by imitating human behavior through rule-based tasks. Gartner elucidates this point by describing RPA as a &quot;virtual worker,&quot; suggesting it is most suitable in situations where organizations wish to assist or even replace manual workers. Strengths and Weaknesses of Robotic Process Automation The main benefits of implementing an RPA solution are a significant cost reduction and the faster speed with which manual activities can be completed. According to the Institute for Robotic Process Automation, these benefits can be realized by &quot;Any company that uses labor on a large scale for general knowledge process work, where people are performing high-volume, highly transactional process functions.&quot; Clearly, RPA is best suited to specific high-volume workloads that require no decision-making process. However, as a single</description>
      </item>
      <item>
         <title>Mainframe Is Staying (And The Rise Of DevSecOps)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-is-staying-and-the-rise-of-devsecops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-is-staying-and-the-rise-of-devsecops</guid>
         <pubDate>February 7, 2018</pubDate>
         <description>CA Technologies releases its top technology prediction for 2018. CA Technologies Vice President, ASEAN and Greater China, Nick Lim shares that organisations must put software at the core of the business in order to thrive in the digital economy. Lim emphasises that enterprises can increase their success by focusing on DevSecOps, Software as a Service (SaaS) and mainframe innovations.

Read more &gt;
</description>
      </item>
      <item>
         <title>Modernizing to Microservices</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/modernizing-to-microservices</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/modernizing-to-microservices</guid>
         <pubDate>May 31, 2018</pubDate>
         <description>Upgrade legacy environments with automation. Though microservices have been around for a while, they have recently gained popularity for their promise to replace the monolithic approach to IT. But what are they exactly? Essentially, microservices are individual web-based applications that serve specific functions and are relatively easy to mix and match to meet different needs. For this reason, they are becoming increasingly appealing as the basis for a software architecture in organizations that want to modernize their enterprise IT systems and enjoy the benefits of utilizing SaaS and the cloud. Building Blocks for Agility Small, lightweight and flexible, microservices are well adapted to agility: when strategies change, they can be modified in relative isolation, without impacting all aspects of the system and risking outages and downtime. Microservices are frequently paired with container technologies, such as Docker and Kubernetes, in order to exploit their natural portability and isolation as ways to package and quickly deploy. This modularity provides an alternative to complex, monolithic applications, where all modifications and updates must first be slowly and carefully checked for risks to the entire system. The structural risk isolation and ability to rapidly deploy (and revert) changes that microservices architectures enable foster a better partnership between developers and operations teams. Flexibility Brings Challenges However, with great flexibility comes a higher risk of something slipping through the cracks. Putting so many different services together can be a bit like building a puzzle: the dependencies and overall structure can be difficult to manage without a clear point of control and a plan for what the finished product looks like. Process scalability is another challenge.
An enterprise can run one cloud-based service with relative ease, but simultaneously running dozens of services across multiple clouds quickly becomes a complicated juggling act. In fact, with all of the independent &quot;moving</description>
      </item>
      <item>
         <title>Learn How to Drive Value and Maximize ROI with APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/learn-how-to-drive-value-and-maximize-roi-with-apm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/learn-how-to-drive-value-and-maximize-roi-with-apm</guid>
         <pubDate>October 23, 2017</pubDate>
         <description>A New Model for Application Monitoring Over the past few months I've been working with a number of customers, educating and training them on some of the new capabilities introduced in CA Application Performance Management 10.5. CA APM 10.5 is a complete makeover and paradigm shift in how we approach application monitoring. It introduced new features and concepts like Attributes, Experience View, Universes, Analysis Notebook, Assisted Triage, etc. These new capabilities provide enormous flexibility by allowing users to slice and dice the data in ways unseen before. Attributes and Perspectives allow users to organize views with just a few clicks, similar to an Excel pivot table. Experience Cards provide a view into your customers' experience, and Analysis Notebook and Assisted Triage provide an easy troubleshooting and root cause mechanism. There are a lot of new concepts introduced in APM 10.5, so before we dive in and explore best practices to operationalize an APM environment, let us understand at a high level the reason behind the design choices and the new concepts. User Experience, Data Manageability and Relevancy At the heart of our CA APM 10.5 design was a burning desire to answer a very important question, &quot;how is my end user experience?&quot; – is it good? And if not, what is the root cause of the issue? Experience View, or Experience Cards, provides a window into your end user experience. It's a card view that lays out key metrics related to end user business transactions and allows users to drill down into the Analysis Notebook, which is like a single pane of glass with all the relevant information presented in one screen. For example, it provides an in-context business transaction flow map set to the appropriate time window of the problem, shows all the key blame point metrics,</description>
      </item>
      <item>
         <title>Why Application Performance Management Must &quot;Shift Left&quot;</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-application-performance-management-must-shift-left</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-application-performance-management-must-shift-left</guid>
         <pubDate>August 9, 2017</pubDate>
         <description>Deliver Quality Software at Lightning Speed by Bringing Application Performance Management and Jenkins Together Over the last couple of years we've been seeing a trend in the industry where application performance management (APM) is &quot;shifting left&quot; from production to pre-production and the application development space. The thinking behind that &quot;Shift Left&quot; strategy is that the same visibility and deep insight that APM provides in the production environment can be leveraged by the developer to deliver a quality build. Old is NOT Gold Before we jump in to see how, let's look at what we have been doing traditionally. For the better part of the last few decades we worked in a waterfall fashion, where the demarcations and responsibilities of the teams were well defined. Teams often worked in silos and things were thrown &quot;over the wall,&quot; so to speak, to become someone else's responsibility. There were obviously inherent delays in the system and more opportunities for defects, bugs, outages, etc. Now this has a huge impact not only on the quality of a release, but also considerably slows down the entire delivery pipeline. In fact, there are numerous studies that state users expect websites to load faster and have become less forgiving and more demanding. The old way of doing things is no longer good enough. Studies show that those who have adopted DevOps found that increased collaboration among teams, closer cooperation and shared responsibilities resulted in over a 66% increase in quality of the deliverables. So organizations that have embraced DevOps seem to be seeing huge benefits. Application Performance Management and DevOps The concept of DevOps was born from the fact that the two key players in delivering software, the Dev and Ops teams, have some competing motivations and still have to come together to ensure a quality release.</description>
      </item>
      <item>
         <title>How close is your organization to achieving continuous delivery?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/how-close-is-your-organization-to-achieving-continuous-delivery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/how-close-is-your-organization-to-achieving-continuous-delivery</guid>
         <pubDate>April 5, 2018</pubDate>
         <description>The continuous delivery pipeline can be visualized as a factory. What do factories require? DevOps Orchestration. DevOps orchestration ensures an organization’s development, physical environments and processes are capable of delivering new builds into production as rapidly as possible. The continuous delivery pipeline can be pictured as a factory and, like a factory, there is a certain level of specialization required for the various tasks that must be accomplished. However, while individual tasks or steps can be automated, the factory’s end-to-end process – its output – is obviously the most important from a business perspective. Similarly, in DevOps, we have many specialized lifecycle tasks that are distinct from one another, but the most important measurement is the end release. Firstly, there’s design and development; next there’s testing (across multiple levels); then comes production monitoring and round she goes. While the stages are generally the same across teams and organizations, the specific requirements and preferences of said teams/organizations lead to a marketplace filled with thousands of tools for accomplishing generally similar tasks. DevOps Orchestration: Beyond Standardization In the days predating agile, the principle of standardizing was popularized by CIOs and vendors. The idea that a company can greatly benefit from standardizing tools and practices across all its infrastructure and business applications made a lot of sense in an age when agility wasn’t the most critical competitive advantage. Nowadays it’s completely different. As organizations compete in an ever-evolving and ever-improving world of customer experience, standardization has subsided. In its place, the enablement of teams has risen, granting the freedom to use any tools they deem fit for purpose. Results matter. 
With an ever-growing set of technologies and tools, the challenge of automating an end-to-end process within a software factory becomes ever more critical. While enabling the separate teams to use their preferred</description>
      </item>
      <item>
         <title>A Radically New Architecture That Changes What You Know About APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/a-radically-new-architecture-that-changes-what-you-know-about-apm-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/a-radically-new-architecture-that-changes-what-you-know-about-apm-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 24, 2019</pubDate>
         <description>Traditional APM Application performance management solutions have been around for almost 20 years, solving some of the most complicated and mission-critical performance problems that have impacted end-users and the business. However, APM deployment architectures have remained fairly constant, consisting of a collection agent, server, and clustering capabilities for larger environments. Even today, with the introduction of newer observability methods, one could argue the deployment architectures are still relatively the same. Certainly, these monolithic deployment architectures have their fair share of challenges, from pre-determined static sizing requirements, the inability to provision new environments on a user-by-user basis, and limitations with dynamic scaling, to other maintenance woes. As the founders and leaders of the APM industry, we said to ourselves: there has to be a better way. Digital transformation initiatives and the evolution of APM Today many of our customers are down the path of digitally transforming their businesses to remain competitive and tackle new initiatives to further attract new audiences. As a result, they have refactored their applications to adopt new application architectures and technologies, aligned organizations using data and business KPIs as a primary measurement and have evolved their development and operational processes to accelerate innovation. New technologies like microservices, Kubernetes, containers, and distributed cloud deployments have introduced new observability challenges that shake the foundations of any APM solution. Now, customers are looking for solutions that not only collect performance data but also provide the intelligence and automation to deliver the insights they need to support their digital initiatives. 
Several years ago, we recognized this shift and started to take steps towards innovating our APM solution to not only support the latest modern application architectures but evolve it to include a powerful, open, scalable, and intelligent AIOps solution. This new solution allows us to correlate and analyze data across users, applications, infrastructure,</description>
      </item>
      <item>
         <title>Don't Let Service Incident Remediation Devour Your Service Desk Time and Resources</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/don-t-let-service-incident-remediation-devour-your-service-desk-time-and-resources</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/don-t-let-service-incident-remediation-devour-your-service-desk-time-and-resources</guid>
         <pubDate>September 30, 2018</pubDate>
         <description>How does intelligent automation relieve the pressure on your IT service desk? It's an all-too-familiar problem: Your IT service desk is faced with supporting more applications, services and systems, while end-user expectations are rising. That complexity is cascading into an ever-expanding service desk workload and increased cost. As it stands, simple and recurrent incidents are logged into your service desk system either manually or via monitoring tools, monitored by experienced staff and quickly categorized by type, urgency and/or importance. Approvals are then required, subject matter experts spend time fixing the issue and the service desk tool is updated with descriptive comments. Almost everything is wrong with this approach. These time-consuming, repetitive tasks are all performed manually. Successful resolution relies on the skills of scarce, expensive staff, and even then, being human means mistakes can occur. Moreover, incident qualification and remediation absorbs significant time, owing to the reliance on manual, non-automated processes. How are organizations responding to this complexity? According to research by MetricNet, 36 percent of IT service desks plan to increase their staffing levels in 2018, while the average cost per ticket is $15.56 and the cost per minute of handle time is $1.60. Another study reveals that 91 percent of IT service desks plan to offer more self-servicing options in the future. But almost everything is wrong with this approach as well! Adding more IT staff to the remediation process is inefficient and costly, given manual effort is still required. Self-service is the obvious solution, but many such tools lack the requisite orchestration technology to close the gaps between silos of automation. 
Service Incident Remediation for Forward-Thinking Organizations Intelligent automation is the answer; service incident remediation automatically resolves your simple, repetitive service requests and incidents, freeing your valuable staff from the time-consuming tasks. It minimizes the opportunity for</description>
      </item>
      <item>
         <title>Is Digital Business Automation important to me?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/is-digital-business-automation-important-to-me</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/is-digital-business-automation-important-to-me</guid>
         <pubDate>July 28, 2019</pubDate>
         <description>First of all, let us acknowledge the most significant evolutionary tide that has hit enterprises in a generation – digital transformation. Without fail, there are digital transformation initiatives underway within your organization, and the chances are that you are directly participating in one. You are not alone in embracing transformative initiatives. If you look at numbers from Gartner and IDC, the question quickly becomes ‘am I doing enough?’ A Gartner finding suggests that by the year 2020, 55% of organizations will be digitally determined. This is not a folly, because IDC independently predicts that enterprises will spend $5.9 trillion on DT initiatives over 4 years. But, even with all this spend and effort, two-thirds of business leaders believe that their companies must pick up the pace – they are too slow. So, the short answer to ‘is Digital Business Automation important to me?’ is YES. Motivation for Digital Transformation Reasons for transformation are unique from company to company. But there are common themes that run across them; transformation aligns to three major business categories: Unlocking new opportunities. In the old economy, delivery avenues, channels, and methods were reasonably straightforward and understood. The new digital economy provides greater fluidity for delivering products or services to your customers. Creativity and out-of-the-box thinking can unlock new revenue opportunities. Delivering exceptional experiences. With competition just a click away, it is critical that you provide your customers with exceptional and engaging experiences. Increasing productivity. Digital transformation increases the scale of work, and pervasive automation drives increased productivity with the same resources across a complex enterprise ecosystem. Agility, especially enterprise agility, is a common outcome for all transformation initiatives. 
Users and customers are living in the ‘now economy,’ meaning they not only want their product or service immediately, but they want a friendly, knowledgeable, and</description>
      </item>
      <item>
         <title>Machine Learning in IT Operations - How do I benefit?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/machine-learning-in-it-operations-how-do-i-benefit</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/machine-learning-in-it-operations-how-do-i-benefit</guid>
         <pubDate>April 17, 2017</pubDate>
         <description>Can predictive analytics help your IT operations team today without engaging a data scientist? In a word: yes. New solutions such as CA Mainframe Operations Intelligence plug into your existing management and monitoring platforms and put the game-changing advantages of predictive analytics at your fingertips. In this blog, I'll introduce the four biggest benefits of IT operational analytics solutions. Predict earlier With reliable performance fundamental to your business meeting SLAs, maximizing uptime and minimizing downtime are critically important. Embedded analytics that use machine learning algorithms can predict anomalies faster and more accurately than traditional monitoring tools. At the same time, dynamic thresholds that adapt to mainframe behavior patterns help eliminate the false positives that cause important alerts to get missed. This is a big help in mainframe environments where automation reduces the risk of inexperienced staff making errors. Remediate faster Correcting performance issues often involves manual triaging using multiple tools to determine a root cause. CA Mainframe Operations Intelligence offers a single authoritative source of performance analytics, and presents it in a single interface. This makes it easier for your teams to correlate data, identify patterns and pinpoint anomalies to handle problems decisively or route them to the right experts. Collaborate more efficiently Because it presents data in a single environment, CA Mainframe Operations Intelligence gives everyone a common picture of mainframe health that they can share and work from. This, coupled with the use of historic data to understand and address persistent issues, allows teams to collaborate more efficiently for faster fault isolation. Continuously improve performance In teams where skills and resources are often over-stretched, machine learning and predictive analytics actively help teams deliver better service. 
They can help you make the shift from a reactive to a proactive way of working, with the ability to dynamically adjust to</description>
      </item>
      <item>
         <title>Demonstrate Data Compliance!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/demonstrate-data-compliance</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/demonstrate-data-compliance</guid>
         <pubDate>August 10, 2017</pubDate>
         <description>Enterprise data are subject to various regulations depending on their geographical location and type of business. An increased effort is expected and mandated to respect those rules, typically meant to better secure and protect the accuracy and privacy of enterprise data. In various regulations, it is also expected to actually demonstrate data compliance, which is not a piece of cake. In addition, most people think that external threats (such as an external hacker trying to access corporate data) are the most common data security issues. In reality, various studies have shown that internal threats comprise 80% of all security threats. In other words, companies should make sure to protect their corporate data against their own employees. Examples of regulations Sarbanes-Oxley Act (SOX): The goal of SOX is to regulate corporations in order to reduce fraud and conflicts of interest, to improve disclosure and financial reporting, and to strengthen confidence in public accounting. Specifically, Section 404 of this act, the one giving IT shops fits, specifies that the CFO must do more than simply vow that the company's finances are accurate; he or she must guarantee the processes used to add up the numbers. Those processes are typically computer programs that access data in a database, and DBAs create and manage that data as well as many of those processes. Health Insurance Portability and Accountability Act (HIPAA): This legislation contains language specifying that health care providers must protect individuals' health care information, even going so far as to state that the provider must be able to document everyone who even so much as looked at their information. In other words, can a DBA produce a list of everyone who looked at a specific row or set of rows in any database? General Data Protection Regulation (GDPR): This new regulation</description>
      </item>
      <item>
         <title>Assessing the Future of SAP</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/assessing-the-future-of-sap</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/assessing-the-future-of-sap</guid>
         <pubDate>September 4, 2018</pubDate>
         <description>How can you bring agility to systems that weren't designed for it? History is littered with examples of fortune tellers and mystics, from Rasputin to Nostradamus. Generally, we regard them as fraudsters or perhaps misguided pseudo-scientists who caused more trouble than good. But what history has also shown us is that experts themselves can get it badly wrong. And this is nowhere more prevalent than in the case of technology: Cars, cinemas and even the PC have all had their significance and longevity seriously doubted by people ‘in the know’. As recently as 2007, Microsoft CEO Steve Ballmer stated, “There’s no chance that the iPhone is going to get any significant market share.” And while we can look back and laugh now at these failed predictions, they were born out of periods of great uncertainty and change. And this is true in the digital age as much as ever before. It’s not always clear what people will take to, what new technical innovation can survive and what is around the corner to shake things up yet again. Clearly, therefore, what you need is an IT infrastructure that can withstand this unpredictability, and SAP is no exception. What was built for a different era must now brace for the fourth industrial revolution, because changes in technology mean changes in consumer expectation. Customers expect to interact with us via apps, whether on the web, from their mobile or in the form of social media networks. There are more connected devices than ever, and this is only increasing exponentially, so to thrive in the digital age, you need your apps to be faster and more responsive than those of your competitors. So how do you bring speed and agility to an older system not designed to keep pace with digitalization—especially when simply abandoning</description>
      </item>
      <item>
         <title>CA in Challengers Quadrant of 2018 Gartner MQ for NPMD</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-in-challengers-quadrant-of-2018-gartner-mq-for-npmd</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-in-challengers-quadrant-of-2018-gartner-mq-for-npmd</guid>
         <pubDate>February 26, 2018</pubDate>
         <description>CA continues to deliver a comprehensive network monitoring and analytics platform. Recently Gartner published the 2018 Gartner Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD), rating CA's network monitoring software in the Challengers quadrant. The Network Operations and Analytics platform, which comprises CA Spectrum for fault and event management, CA Performance Management for infrastructure performance monitoring, CA Network Flow Analysis for flow monitoring, CA Application Delivery Analysis for packet monitoring, and CA Virtual Network Assurance for SDN/NFV monitoring, all converged into one network operations dashboard experience, was rated favorably and has improved in completeness of vision, leading the Challengers and moving closer to the Leaders quadrant. Gartner recognizes CA's ability to commit and deliver requested enhancements as above average, sees a marked increase in user satisfaction and an improvement in interoperability with APIs, allowing the ingestion of third-party data and the ability to incorporate third-party analytical engines outside of the CA suite of products. We believe this is a great testament to our continued commitment and investment to the network monitoring portfolio. Delivering a converged, full-stack NetOps monitoring experience CA has moved quickly to address dynamic market changes and the challenges IT faces when monitoring modern networks along with traditional infrastructure. The analyst community expects a converged network monitoring experience for the entire stack that delivers faster root cause analysis, streamlined workflows and an improved customer experience. CA continues to evolve its network tools to meet these expectations. 
Figure 1: Fault and performance network monitoring dashboards for traditional and SDN environments. For Network Engineers and Architects responsible for network transformations, who need to assure uptime for thousands of applications and deliver network services in a matter of minutes, the importance of a converged, full-stack network monitoring experience for both traditional and new modern networks is more important</description>
      </item>
      <item>
         <title>CA APM .Net Agent Now Available on Azure Marketplace</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-net-agent-now-available-on-azure-marketplace</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-net-agent-now-available-on-azure-marketplace</guid>
         <pubDate>September 6, 2017</pubDate>
         <description>View Azure performance metrics in CA Application Performance Management in a single dashboard In English, azure describes anything blue, like a cloudless sky: an interesting word for a vendor that is arguably growing twice as fast as even AWS. Although AWS does have a huge market share, it is no surprise, however, to find enterprises deploying applications in Microsoft Azure. The .NET Agent for Microsoft Azure App Services allows enterprises running .NET applications in Microsoft Azure to identify and resolve performance issues. The .NET Agent for Microsoft Azure App Services integrates performance metrics into CA Application Performance Management (CA APM) for intelligent analytics, alerting, and visibility on a single dashboard. The Azure Site Extension makes enabling monitoring of Azure applications really easy with a minimum number of clicks. Here are the steps to install and configure the CA APM .NET Agent for Microsoft Azure App Services: Installation Pre-requisites: If you don't already have an APM instance, you can sign up for a free 30-day trial of CA Application Performance Management, available through CA Digital Experience Insights. To add the CA APM .NET extension to your App service: Navigate to your App service in the Microsoft Azure portal; select the Extension from the left menu tab; click + add and select the CA APM .NET Agent for Azure App Services from the list of available extensions; click OK to accept the license terms and click OK to install the extension. Configuration Once installed, the extension must be configured with the APM Server details. To do so, navigate to the App Settings of the App service to configure the Site Extensions. Once configured, the site extension helps you manage the performance of applications deployed in Azure right from the End User experience. With 100% transaction visibility, you get insights into the end user</description>
      </item>
      <item>
         <title>JVM Performance Parameters in AWS Lambda - CA APM Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/jvm-performance-parameters-in-aws-lambda-ca-apm-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/jvm-performance-parameters-in-aws-lambda-ca-apm-blog</guid>
         <pubDate>December 18, 2017</pubDate>
         <description>Despite being a long-time Java programmer, I had presumed that serverless &quot;function-as-a-service&quot; services like AWS Lambda would be the domain of more functional languages like NodeJS. Concerns like the latency of cold starts and the subsequent cost/benefit ratios of costlier just-in-time compilation optimizations seem to conceptually favor runtimes designed to work quickly on single invocations. And what runtime could be better designed for run-once than one designed for a web browser? A recent article on InfoQ challenges this notion. The fundamental basis for this is that an AWS Lambda is not actually run-once, as the name might imply. Lambda services are still written as HTTP servers; they are simply lazy-launched and have a limited lifecycle not under the user's control. Processes aren't launched until a request arrives, but they aren't destroyed after a single invocation. Rather, AWS decides how long to keep a given process running and decides when to kill it. While the full details of the presentation aren't yet online, John Chapin of Symphonia empirically tested a number of different scenarios with a benchmarking tool of his own design. Some of the results validated well-known behavior, where the CPU performance of an instance is controlled by the memory allocation requested. Some information is tantalizing, where Chapin collected data, over a 2-day run, of how frequently instances are restarted. It seems that on a per-invocation basis, 128MB instances were restarted 1.8% of the time, whereas 1.5GB instances were restarted 0.97% of the time. The way that this type of probability should be interpreted would depend on the actual methodology of this test, so there's definitely more information pending. Once you have applications running in Java, the next logical question to consider is what kind of code will be performant in this environment. Performance monitoring tools are obviously part of the equation,</description>
      </item>
      <item>
         <title>Cloud Comes to Mainframe: Fact or Fiction?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/cloud-comes-to-mainframe-fact-or-fiction</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/cloud-comes-to-mainframe-fact-or-fiction</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>By Sreenivasan Rajagopal, Senior Director of Product Strategy, CA Technologies What comes to mind when you think about developing on OpenStack? Convenience? Instant provisioning? Turning services on and off as developers need them? I doubt you'd use the same terms to describe the mainframe. But wouldn't it be great if your mainframe team offered the same agility that you get from cloud? Open to opportunities As organizations push forward with digital transformation, cloud operating systems like OpenStack are increasingly popular. OpenStack makes it especially easy to control large pools of compute, storage and networking resources. Everything is managed via a dashboard that keeps administrators in control while enabling users to spin up resources through a web interface. Major reasons for OpenStack's popularity include: Operational efficiency: by making it straightforward to pool multiple cloud resources, administrators can provision development environments almost instantly, which helps to boost agility, accelerate innovation and speed time to market Scalability: it's easy to spin up more instances to serve more users as demand increases Driving digital transformation: an OpenStack survey[2] confirmed that software development is the most popular workload to run on OpenStack, and is seen as indispensable in enabling DevOps. Focus on results Many application development team leaders are less interested in debating with Infrastructure and Operations (I&amp;O) teams about which platform they should use – distributed, cloud or mainframe. Their only real concern is whether a platform meets these essential criteria: Does it enable developers to work at the speed the business demands? Does it support the business’s development processes? Can it be used throughout the enterprise? Can developers make changes quickly? 
And for many application development leaders, services like AWS offer enterprise developers the efficiency and scalability they need. I saw an example recently where a system programmer provided an</description>
      </item>
      <item>
         <title>Open Workspaces: the Best of Times, and the Worst of Times - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/open-workspaces-the-best-of-times-and-the-worst-of-times-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/open-workspaces-the-best-of-times-and-the-worst-of-times-rally-software</guid>
         <pubDate>May 15, 2018</pubDate>
         <description>Trends in office workspaces are often viewed as fads. I prefer to view them as trends. As we progress toward greater understanding of what is needed to develop software well, our views become more refined and nuanced, and this should apply to workspaces, as well. Rather than a model of periodic fads, we should aspire toward the discovery of key guiding principles, and the inspired designs that can result. Imagine you're at a fancy Chinese restaurant with a friend, and the food is fantastic. He wants to know the secret, so he asks the chef what kind of cookware is used. Upon learning that they use woks, he excitedly goes to a home goods store and purchases a wok for himself. With great expectations, he puts his new wok on his stove, adds whatever he has in his pantry into the wok, and the results are… terrible. After tasting his creation, he throws away the wok and declares woks a terrible idea. Recently I've read some rants about Open Workspaces. The complaints basically come down to distractions, and the inevitable conclusion of such rants is that Open Workspaces are a terrible idea. Having worked in all kinds of environments in my two decades as a software developer, including private offices, cubicles and other spaces that are hard to classify, what I can honestly say about the open workspaces I've worked in is this: they have been the best of times, and they have been the worst of times. This is as you would expect, if you think about it: open workspaces are open. Anything can happen. If it's mostly collaboration, life is good! If it's mostly distraction, it's terrible. (As an aside, I'd like to reassure the reader that the open workspace I've been working in at CA</description>
      </item>
      <item>
         <title>Doubling Down on the Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/doubling-down-on-the-mainframe</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/doubling-down-on-the-mainframe</guid>
         <pubDate>December 12, 2018</pubDate>
         <description>It's been about a month since we formally announced the successful acquisition of CA Technologies by Broadcom. Mainframers across the globe have been reaching out to me so I thought I'd take this opportunity between the holidays to share more about our future. A month ago, in my video outlining the acquisition, I spoke of Broadcom's decades-long track record of being a technology leader in the markets it serves. We see this as being consistent with our goal of creating great technology to make your mainframe a more effective and efficient platform. Much of the feedback I've received focused on Hock Tan's comment stating his desire to &quot;double down for future growth.&quot; With this in mind, let me share with you some of the changes we're making moving forward. First, we understand Agile and Modern Engineering is how we create sustainable and continuous value for you. This manifests itself in how we find and deploy our engineering talent. To put it simply – we're hiring in various locations around the world. I'm excited that we're in a position to increase our talent pool globally. Our focus is on bringing aboard engineering talent working &quot;shoulder to shoulder&quot; to deliver continuous value to market, faster. Second, we understand you need to digitally transform and that requires you to balance investments supporting both innovation and modernization. We, too, face this balancing act. Being part of the Broadcom family enables us to not only continue our current roadmap commitments, but also allows for increases in key investment areas. We're expanding investment in our core products so that you can continue to count on them for many years to come. At the same time, we're making targeted investments in new initiatives such as Zowe and CA Brightside, CA Mainframe Resource Intelligence, CA Mainframe Operational</description>
      </item>
      <item>
         <title>Advancing Network Software to Improve Triage Times</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/advancing-network-software-to-improve-triage-times</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/advancing-network-software-to-improve-triage-times</guid>
         <pubDate>July 30, 2018</pubDate>
         <description>The latest release of CA Performance Management 3.6 delivers even more network software automation to improve NetOps triage times. Network software event notifications are a key workflow element that enables the quick and accurate triaging of problems in your monitored environment. The latest version of CA Performance Management offers powerful and automatic script execution for event notifications to improve triage times for network operations teams. A network event is an indication that a significant happening has occurred and provides information to your network software on the health and status of the monitored environment or of the application itself. Events can be routed via email to designated recipients or forwarded as an SNMP trap to external applications. CA Performance Management 3.6 now offers a third action for event notifications: automatic script execution. Let’s first take a look at how event notification scripts are configured. Configuration Since its inception, CA Performance Management, part of the Network Operations and Analytics platform from CA, has used a multi-step wizard to make notifications simple and intuitive to configure: Step 1: apply a name and description; Step 2: define the scope that triggers the notification; Step 3: specify the conditions that must be met. The fourth and final step is where you designate which actions will occur when the notification is generated. In the latest version of our network software, CA Performance Management 3.6 adds a Script tab alongside the legacy Email and Trap tabs. Here, you specify an action script that is automatically run when the event occurs. Action scripts reside in a dedicated directory on the CA Performance Center server. Scripting, emailing and trap generation are not mutually exclusive, meaning a single event notification can trigger multiple actions. For more information, visit the Configure notifications section of the CA Performance Management 3.6 documentation.
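As a rough sketch of what such an action script might look like (the exact argument interface and scripts directory are product-specific; the file name and log path below are hypothetical), a minimal script could simply append each event's details to a triage log:

```shell
#!/bin/sh
# notify.sh: hypothetical event-notification action script.
# CA Performance Center would invoke this with event details as
# arguments; here we just record whatever arrives, with a UTC timestamp.
LOG="${EVENT_LOG:-/tmp/pm-events.log}"
printf '%s event: %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$LOG"
```

Since a single notification can trigger multiple actions, a script like this would typically complement, not replace, the email or trap actions for the same notification.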
Parameters &amp; Example</description>
      </item>
      <item>
         <title>Simplify Complexity with CA's Docker Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/simplify-complexity-with-ca-s-docker-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/simplify-complexity-with-ca-s-docker-monitoring-tools</guid>
         <pubDate>August 30, 2017</pubDate>
         <description>Many of today's developers are adopting Docker Containers to help accelerate application delivery and move towards a microservices application architecture. However, this architecture introduces a new layer of monitoring challenges to an already complex application environment. The dynamic, ephemeral nature of these Docker container environments makes it difficult to understand the relationships between container, host and application, to see what changed, and to know when and where to act when issues arise. Why is it a challenge? To start, microservices introduce many new, smaller application components to an already complex application environment. This makes it difficult to understand the performance and health of each component and their impact on other services. With the increase in components, users find it difficult to understand application environments and every component's relationships using traditional application service maps. Unfortunately, what you end up with is a big, messy map, or worse, a tool that only allows you to zoom in and out - which only works if you know what to zoom in on. What's needed? The ability to view the various layers of dependencies between the application, the container and the host, as well as the health of each component. But understanding the intricate relationships is just one piece of the puzzle; visualizing it in a way the human brain can easily understand is another piece altogether. To achieve this, your Docker monitoring tool will need to support multi-dimensional data and have the ability to assign names or attributes to that data. These attributes become the key to unraveling and pivoting on the data that best matches your needs. Think of attributes in the context of your current playlist. You probably have created specific lists of music based on genre, type of</description>
      </item>
      <item>
         <title>How Do You Prevent Outage Outrage?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/how-do-you-prevent-outage-outrage</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/how-do-you-prevent-outage-outrage</guid>
         <pubDate>September 6, 2018</pubDate>
         <description>For customer and user satisfaction, continuous delivery delivers. People today have higher expectations of digital products and services than ever before, whether they are consumers or just using technology as part of their jobs. Consider: when was the last time you were satisfied by a website or application that took a minute to load, or didn’t work at all? Most likely, you closed the page and went elsewhere. But when outages, unplanned downtime, or even just degraded performance happen for applications and services that users rely on—and have no alternative for—it can be downright rage-inducing and can halt critical business processes for hours. So, how do you prevent this negative user experience? Why Do Outages Happen? The key to recovering from outages, and avoiding future ones, is identifying what causes them in the first place and then changing your approach so that those causes cannot occur in the future. There will always be a class of outage that is unavoidable no matter how much fail-over planning you do, but such events are relatively rare and can be minimized with the appropriate tools and a plan. The truth is that most outages are caused by very common things such as: Simple human error Inadequate testing of a release Unexpectedly high traffic after the release is live Configuration problems or oversights Of the preventable causes, human error is the most common—and most easily avoided—reason for release-related downtime. People doing repetitive manual work are prone to making mistakes, and application release processes are full of repetitive tasks. The proven way to avoid these types of problems is through automation. Automation removes human inconsistency from repetitive, standard tasks and enables people to do the creative work and problem solving that machines cannot. Flying High with Continuous Delivery Automation All organizations—even the ones with seemingly perfect</description>
      </item>
      <item>
         <title>The Future of the Mainframe - Automation in a Hybrid Enterprise</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-future-of-the-mainframe-automation-in-a-hybrid-enterprise</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-future-of-the-mainframe-automation-in-a-hybrid-enterprise</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>While automated IT services and processes have the potential to drive more seamless IT operations, they often exist in silos. This is what Gartner means when they talk about &quot;Islands of automation&quot;[1] in areas like service desk, virtual machine application deployments, storage and networking. Imagine how much extra speed and throughput you'd gain by joining up these &quot;islands&quot; – especially for your mainframe platforms that might be sitting in disconnected silos. With that said, here are three opportunities that illustrate my point. 1. Making the mainframe available to line of business developers Automation, in the form of DevOps, helps to drive an agile approach that enables continuous delivery of apps and updates. However, these approaches are less widespread among mainframe developers. Manual processes like provisioning and launching mainframe test environments, or managing test data, effectively make the mainframe less accessible to enterprise developers. In the near future, automation will make mainframe infrastructure available &quot;on demand,&quot; and ultimately as code, so developers can access a development environment on the mainframe as easily as they can in the cloud. 2. Orchestrating disaster recovery for IT operations It's also true that mainframes are less geared towards efficient recovery. With distributed environments, automation enables granular controls over rollbacks; whereas restoring the mainframe is typically a hands-on manual process. However, with orchestration, there is the ability to deliver more granular disaster recovery, as well as the ability to support rollbacks. The goal here is to move towards Disaster Recovery as a Service across the whole IT environment, rather than the current scenario where some infrastructure is recovered via automation and other areas manually. 3.
Making mainframe data available to Line of Business users and Big Data analytics Many manual processes are often required before Line of Business users can start unearthing the nuggets of insight</description>
      </item>
      <item>
         <title>What’s New In Spectrum 10.3 Monitoring Software - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-spectrum-10-3-monitoring-software-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-new-in-spectrum-10-3-monitoring-software-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>September 26, 2018</pubDate>
         <description>CA Spectrum is a leading network monitoring software solution that provides network architects, engineers, and operators visibility into a complex ecosystem of traditional and modern software defined networks. As application innovation and user experience become key drivers for a company's success, it has become increasingly vital for these organizations to effectively monitor their networks to ensure continued customer satisfaction. Therefore, we are pleased to announce the release of CA Spectrum 10.3. We hope that our existing customers will upgrade and utilize a breadth of new features so organizations can monitor and manage their environments more effectively. AIOps for Network Operations CA Digital Operational Intelligence is an analytics platform that provides analysis and machine learning to empower AI-based IT operations. It also helps IT organizations deliver a phenomenal user experience, improve service quality and drive operational efficiencies. Previously, CA Spectrum shared monitored inventory and alarms through CA Unified Infrastructure Management. Now, CA Spectrum alarm, inventory, services, metrics, and group collection data is published directly to the analytics platform. In-context launch from alarms in the AIOps platform to alarm details in CA Spectrum provides more insight into issues. Containerization of CA Spectrum IT teams need more flexibility to deploy new instances and upgrades of CA Spectrum and its components. Continuous integration and deployment has become necessary for customers, particularly managed service providers that need to quickly onboard new clients. This brings in the need for containerization. CA Spectrum now supports Docker containerization with the Red Hat OpenShift container application platform for orchestration of on-premise systems or cloud-based applications in as little as five to seven minutes per image.
CA Spectrum components for containerization include OneClick, SpectroServer, and Secure Domain Connector images. Reporting Enhancements Some organizations utilize multiple CA Spectrum Report Managers based on size of deployment, organization structure or business model.</description>
      </item>
      <item>
         <title>Solving the Customer Journey Puzzle with Funnel Analytics - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/solving-the-customer-journey-puzzle-with-funnel-analytics-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/solving-the-customer-journey-puzzle-with-funnel-analytics-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>August 5, 2019</pubDate>
         <description>In today’s app-driven world, your users expect nothing short of an exceptional customer experience, and you have only one chance to make a good impression or risk losing to the competition. In fact, 83% of U.S. consumers say having a positive customer experience with a brand is more important than the product itself, and “companies that offer the best personal, ‘individualized’ experiences to their customers ultimately reap the benefits of higher revenue growth and improved brand standing and loyalty”. So if delivering a great customer experience is the difference between succeeding and failing, where do you start? The first step is to gain an understanding of your users’ intent and behavior. But often, trying to do so can feel like you are trying to solve a difficult puzzle – your users are a mystery. If this sounds familiar, keep reading. Solving the Puzzle with Analytics In order to really understand your users, you need a solution in place that can deliver the proper metrics – one that can provide insight into the entire end-to-end customer journey and provide data related to conversions, revenue, drop-offs, etc. Often, IT and app teams have multiple solutions in place to attempt to capture this information. In fact, according to a recent study by Dimensional Research, 72% of companies are currently using multiple tools (2 or more) to gather end-user metrics. But what if this could be avoided? DX Application Performance Management (DX APM) contains advanced real user monitoring capabilities which provide real-time insight into app performance and the customer journey across the entire application cycle regardless of device type (web, mobile, wearables). With built-in features such as session playback, resource waterfall charts, app flows and funnel analytics – DX APM is designed to help organizations improve app performance and design to</description>
      </item>
      <item>
         <title>Join or Die: The Case for Unifying the API Lifecycle to Transform Digital Experiences - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/join-or-die-the-case-for-unifying-the-api-lifecycle-to-transform-digital-experiences-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/join-or-die-the-case-for-unifying-the-api-lifecycle-to-transform-digital-experiences-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 28, 2019</pubDate>
         <description>Join or Die, and its Relevance Today In 1754, a political cartoon attributed to none other than Benjamin Franklin appeared. The cartoon depicted a severed snake, with each piece labeled to represent one of the American colonies. Beneath the picture were these words: “Join, or Die.” The cartoon made a direct, easily understood appeal to readers: The only way the colonies could survive would be through uniting, and working together to pursue shared objectives and defeat a common enemy. Why the history lesson? It struck me recently that dev, sec and ops teams tasked with managing APIs aren’t all that different from our American colonists. On today’s competitive battleground, an organization’s success is increasingly determined by its digital prowess. Digitally advanced companies and new technologies are disrupting competitors and inventing new markets. That’s why adoption of clouds, containers, service mesh and other modern architectures is so pervasive. For these efforts to truly pay off, however, teams that once worked in isolation now need to collaborate and operate in a unified way. And the stakes for this effort are high: If teams keep operating independently, the business’s very survival could be at stake. Unifying the API Lifecycle When it comes to unifying previously disparate teams, APIs represent a strategic asset. By uniting data and logic from many distributed systems, APIs play an integral role in the modern application development architecture. Just like software, APIs have a lifecycle, which must now be managed in an optimal, intelligent, and unified fashion. This is a key requirement in order to fundamentally advance development, agility and insight so teams can thrive amidst disruption, and deliver the optimized digital experiences customers and employees require.
Now more than ever, it’s vital to effectively manage the entire API lifecycle, including planning, creation, testing, security, management, discovery, development, and</description>
      </item>
      <item>
         <title>How AIOps Can Help You Deliver Superior User Experiences</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-can-help-you-deliver-superior-user-experiences</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-can-help-you-deliver-superior-user-experiences</guid>
         <pubDate>May 31, 2018</pubDate>
         <description>Digital experience is the new competitive battleground: get the insights you need to win. In an increasingly digital world, IT Operations Management Solutions (ITOM) teams play a critical role in delivering top-notch digital experiences to users – whether internal or external users such as customers and partners. The Modern Software Factory neatly summarizes how teams collaborate to design, develop, test, deliver, secure and improve apps. From concept to product and beyond, the Modern Software Factory provides a blueprint for success through agility, security, automation and insights. And when you are differentiating your business on digital experiences – insights are especially important. Insights into performance. Insights into availability. Insights into usage and much more. Insight Challenges Exist But there can be challenges to getting these insights. What may seem like a simple app may be composed of a complex array of APIs, or microservices running on cloud and on-premises infrastructure, which can be highly containerized and virtualized, including both servers and networks. How do you know when your API endpoints are down or slow? How do you keep up with all the metrics you monitor without getting alert fatigue? With increased SDN adoption, how do you wipe out blind spots in virtualized network functions? Point tools are not the answer. I recently wrote about avoiding the one trick pony in the context of full stack monitoring. These tools can produce blind spots – even if you try to have a point tool for every specific element in the digital delivery chain. With point tools you may also have the problem of data silos with missing context and limits to collaboration to solve problems (app monitoring separate from infrastructure monitoring, etc.). Your Opportunity What if you could get a unified view of digital experiences via a</description>
      </item>
      <item>
         <title>How Smart Car Tech Applies to IT Operations - AIOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-smart-car-tech-applies-to-it-operations-aiops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-smart-car-tech-applies-to-it-operations-aiops</guid>
         <pubDate>September 17, 2018</pubDate>
         <description>Artificial intelligence, analytics and automation help build efficiencies and drive organizational growth. Imagine a ride to the airport. Getting into the car, you're greeted by the dulcet tones of Tom Jones. He's been a recent addition to your relaxation playlist, but today's grind doesn't quite put you in the mood for Tom's crooning. Sensing your shift and correlating this with your jam-packed work schedule, the music changes: time for a short, sharp wake-up from the Godfather of Punk, Iggy Pop: &quot;I am the passenger, and I ride, and I ride&quot;. Now we're talking! It's a good 45-minute hop to the airport, but you've got plenty of time to make your flight. The vehicle has already assessed the best route, necessary speed changes and steering tweaks based on analysis of truck-loads of cloud and edge data: scheduled roadworks, traffic-signal sequences, accident pattern recognition, weather predictions, infrastructure failure probability, even the fact that Acacia elementary school is holding a sports carnival and Norfolk county is conducting some seasonal bushland back-burning. Everything is a valuable data point, ingested and dissected within a single collective soup of analytical goodness. As you settle back to assess the sales reports automatically presented on your console, the car detects a potentially problematic condition. According to sensor data gathered and compared against the same model of car with similar mileage and driven under the same conditions, there is an 87 percent likelihood of front-left wheel bearing failure occurring on next month's family trip to the snow fields. But it's no big deal. The system has already connected to the dealership's servicing system and ordered the part. Unbeknownst to you, the system has also booked the car in for the work, which will be done after dropping you off at the airport. Plenty of time to do that before the car picks your daughter up</description>
      </item>
      <item>
         <title>Bridge the Dev/Ops divide through self-service provisioning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/bridge-the-dev-ops-divide-through-self-service-provisioning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/bridge-the-dev-ops-divide-through-self-service-provisioning</guid>
         <pubDate>November 9, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 6 Earlier in our playbook, Modernizing Development on Mainframe, my colleague Sujay Solomon outlined the key requirements for DevOps architects to build a robust continuous delivery pipeline for cross-platform DevOps. Success is achieved when DevOps architects have the flexibility to use their preferred industry-standard or home-grown solutions, as well as an easy-to-use UI for configuring mainframe interactions. Only then will they be empowered to architect a mainframe DevOps pipeline capable of mitigating strategic risks, amplifying performance of teams in development, test, operations and security, and managing intelligently against increasing cost pressures. Address the elephant in the room Choice in tooling and friendly UIs make a solid first step towards modernization, but more is required to resolve issues arising from developers not having adequate access to the mainframe. This becomes a huge problem during application testing, which, due to strict governance, involves a highly laborious process to meet regulatory compliance. The result? In my discussions with customers, I've learned that simply fulfilling a developer's change request to provision a test environment may take months, with IT operations needing to oversee every step of the process. Everyone loses with this outcome. On one hand, development teams cannot bring changes to market at the pace required to anticipate and meet escalating customer expectations. On the other hand, the exorbitant cost of provisioning and then maintaining these environments taxes an already shrinking IT budget. Altogether, it creates considerable friction between development and operations, inhibiting progress towards a more unified organization characteristic of DevOps. Apply an A+ philosophy Before, with access to preferred tooling came greater agility. Now, with automation of labor-intensive processes comes greater autonomy.
Autonomy in the form of self-service provisioning affords IT operations freedom from dedicating time and resources to supporting development teams. Namely, development</description>
      </item>
      <item>
         <title>From the Labs: A Mainframe Story</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/from-the-labs-a-mainframe-story</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/from-the-labs-a-mainframe-story</guid>
         <pubDate>June 19, 2018</pubDate>
         <description>If you're working with mainframe platforms, you know that they are expected to run, no matter what. This requires a certain level of continued mainframe optimization, and a rigorous focus on the fundamentals of economics, skills, and availability. But, that might not be enough. Leaders tend to ask more of us than just keeping the lights on. Digital transformation – helping grow an organization using technology – is a business trend that is not going away and likely never will. In the spirit of transformation, I share this &quot;from the labs of CA&quot; story. It's a story of how we challenged our product teams to develop a completely new sort of solution, how they rose to the occasion, and how, along the way, they embraced new ways of doing things to make it happen. A few years ago, our development teams endeavored to develop a new solution, using artificial intelligence in IT operations, or AIOps. Imagine: the team had to take the concept of artificial intelligence, itself a vague and somewhat abstract idea, select algorithms, and then train them to predict operational outages using real data and machine learning. Now, they weren't doing this all alone, because our teams develop using agile, involving customers continuously throughout ideation, design, development, and implementation. This ensured that customers' needs were taken into account all along the way. But could we develop fast enough? The solution needed to adapt to a constantly shifting and very hot new market. To get there, the team was willing to adapt the way they did things, even where it required significant change in the way they worked, the way they measured themselves, and much more. Of course, any change starts with the customer's point of view. Customers got involved early in product design</description>
      </item>
      <item>
         <title>Three Ways Agile Software Serves the C-suite</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/three-ways-agile-software-serves-the-c-suite</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/three-ways-agile-software-serves-the-c-suite</guid>
         <pubDate>January 22, 2018</pubDate>
         <description>Agile methodology – and the Agile software that supports your Agile Transformation – impacts virtually everyone in the organization, not just R&amp;D. As responsibility for overall business execution is increasingly shared across the C-suite, company leaders also stand to see great benefit from agile software like CA Agile Central. Today, teams made up of the CEO, CIO and CTO work together to make decisions about company-wide agile business processes and agile software that supports strategic initiatives. This makes sense. As they say, &quot;two heads are better than one.&quot; It's even truer when your collaboration includes three, four or even more great minds working together. So part of championing agile within your organization is illustrating the top benefits of agile software for each leader. Many people in a company benefit from agile, but below I've listed the key benefits for the CEO, CTO and CIO, all of which provide enormous value to the organization as a whole. CTO Benefit: Deliver projects more quickly, with fewer hiccups Most CTOs face a constant flood of &quot;urgent&quot; business requests that far outstrips the capacity of their teams. Increasing speed of delivery will free up resources and allow for faster response to demands. Agile Central is designed to speed delivery and reduce the risk of missed dates. A Coleman Parkes Research study found that Rally speeds up decision-making by 41 percent and delivery by a whopping 35 percent. In addition, Agile Central provides support and frameworks for the following agile practices: Frequent feedback cycles and transparent communication to identify misunderstandings that could otherwise hamstring project timelines. Continuous iterating to virtually ensure defects are detected and fixed long before they impact delivery timelines. A retrospective at the end of each iteration to identify processes that could be improved, which then acts as a guide</description>
      </item>
      <item>
         <title>Scale Agile to Extend its Benefits Across the Enterprise</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/scale-agile-to-extend-its-benefits-across-the-enterprise-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/scale-agile-to-extend-its-benefits-across-the-enterprise-rally-software</guid>
         <pubDate>December 14, 2017</pubDate>
         <description>Is scaling agile the answer you've been looking for? Your organization exists within a fast-paced, rapidly changing marketplace. You need to deliver high-quality projects fast. You've already introduced some agile into your development teams and experienced competitive advantages like enhanced collaboration, improved quality, faster delivery of value, reduced cost of development and elevated customer satisfaction rates. Could you scale agile and leverage it further? Looking for a competitive advantage? Scale agile across all IT and Dev teams Scaling agile across your entire project portfolio would mean extending the competitive advantages inherent to agile. But that's not all: It would also provide focus and insight into your organization's highest-value initiatives. It would help to ensure that products work well together and that each provides a return on your investment. You can identify gaps and highlight new requirements at the portfolio level, which would minimize ad-hoc projects that could conflict with one another or duplicate efforts. And scaling agile across multiple projects would enhance customer value. Why the whole company should become agile If scaling agile across multiple teams and projects makes sense, does it also make sense to go beyond your IT and Dev departments? Agile isn't limited to software product development; agile methodologies can be applied to many finance and marketing department projects as well. Let's use marketing as an example. Marketers today operate in a fast-paced, multichannel world. They no longer have the ability to spend months designing and implementing large projects. They require speed and flexibility to innovate and respond immediately to market disruptions. Sound familiar? Does it sound like Dev teams? And just like Dev teams are under significant pressure to build quality products fast, today's CMOs are increasingly responsible for rapid business growth. 
With the pace of across-the-board change they're facing, an agile transformation could be</description>
      </item>
      <item>
         <title>A Network Monitoring Tools Convergence is Here!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/a-network-monitoring-tools-convergence-is-here</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/a-network-monitoring-tools-convergence-is-here</guid>
         <pubDate>March 6, 2018</pubDate>
         <description>Converging key CA Spectrum and CA Performance Management workflows to drive easy NetOps dashboard experiences. Today, half of enterprises find themselves using 11 or more network monitoring tools, and for many, this may be just the beginning. Additionally, those surveyed admitted to spending over 70% of their day troubleshooting network outages, and fewer than 50% actually detect these outages before an end-user is impacted. At CA we truly believe a convergence of network operations is well overdue. Because of rapid advances in networking technology coupled with today's consumer demands, NetOps teams will find themselves overwhelmed with far too many tools and far too little visibility into network availability. The advantage of the CA network monitoring software convergence is the correlation of network fault (CA Spectrum) and network performance (CA Performance Management) data together in one NetOps dashboard. This lets you move away from multiple operational consoles, simplifying your ability to correlate different types of data and events. The ability to drive unified fault and performance workflows when triaging availability or service degradation in a single dashboard will no doubt improve operational efficiency and help to support a better experience within your network. Figure 1: Seamlessly integrated network fault and performance data in a single dashboard. A key benefit of our network monitoring tools is to provide industry-standard tier-1 operational troubleshooting and issue/ticket management workflows using long-standing CA Spectrum technologies within a performance dashboard. Triage actions like ping or trace-route are just a click away, while opening and managing trouble tickets provides the same functionality that exists in CA Spectrum WebView, driven from the market-leading CA Spectrum OneClick experience. 
Figure 2: Easy CA NetOps operational workflows for faster triage times. When a device is monitored by CA Spectrum, drill-down capabilities are linked to the device</description>
      </item>
      <item>
         <title>3 Ways to Increase the Usefulness of Your Service Catalog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/3-ways-to-increase-the-usefulness-of-your-service-catalog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/3-ways-to-increase-the-usefulness-of-your-service-catalog</guid>
         <pubDate>August 23, 2018</pubDate>
         <description>Achieve Continuous Service with Self-Service Orchestration Service catalogs are a great way to organize your organization's service offerings, but can be underutilized. The front-end of your service portfolio should be optimized to provide users with exactly what they need and save your system admins the time and workload of resolving repetitive, simple requests, but too often they deliver a poor user experience instead, leaving users scratching their heads or slamming their keyboards. So how can you make your service catalog maximally useful? Here are our top three tips: 1) Centralize your service offerings as much as possible This might sound obvious, but a service catalog should be a complete, centralized resource for users, which requires compiling the services across all areas of your enterprise and breaking down any silos that exist between different units so that information in the catalog remains up-to-date. 2) Find out what your users are really requesting If you know what your users are looking for, you're better able to provide it, and better able to identify areas that need more dedicated resources. The most frequent service requests can be categorized and given a prominent place in your service catalog, so users aren't left wondering where to find answers. Track the tickets you receive to see what needs to be included and what doesn't. 3) Introduce self-service orchestration The best way to improve your users' service experience and get value out of your service catalog is to introduce self-service with automation. There's no reason to spend your admins' time on requests as simple as password resets, or to let your users wait for help that can be easily automated. When approximately 67% of overall IT user desk cost is spent on personnel, and the average service ticket for support costs $15.56, your team is too valuable to dedicate</description>
      </item>
      <item>
         <title>How to Select the Right Application Performance Management Tool</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-select-the-right-application-performance-manangement-tool</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-select-the-right-application-performance-manangement-tool</guid>
         <pubDate>August 21, 2017</pubDate>
         <description>The Application Performance Management (APM) market consists of about 12+ vendors who all provide a variety of functionality for monitoring the performance and availability of applications. Selecting an APM tool can be a daunting task as many vendors provide very similar functionality. However, there are some capabilities that you should pay close attention to, as they can make a difference in troubleshooting both common issues and those pesky, hard-to-find app issues. Here are five key capabilities to consider when selecting an Application Performance Management solution: View Application Topology – A great APM tool not only depicts the entire application environment including dynamic microservices architecture but allows the user to easily pivot on certain aspects such as application, owner, location or business function. This requires advanced logic to be built into the map such as multidimensional database search and analysis features. A &quot;Most Distinguishing Attribute (MDA)&quot; would allow users to easily distinguish and pivot on a variety of map components and common characteristics, but could also reflect characteristics that are unique to only their organization. For example, if APM determines that a grouping of problems all share the common attribute of a specific AWS zone, MDA would suggest that the underlying problem is related to the AWS zone itself. By leveraging attributes, the more business-relevant information customers supply to APM, the better the analysis the tool can provide. This powerful search and analysis feature becomes critically important in those environments that are highly dynamic such as cloud, containers and microservice application architectures. It's also important in understanding change that has an impact on app performance. Understand Change – Many app issues are caused by a change in the environment such as new build/code, elastic changes, or performance degradations. 
The tool should be able to show change over time and</description>
      </item>
      <item>
         <title>A System Built for Trust</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/a-system-built-for-trust</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/a-system-built-for-trust</guid>
         <pubDate>September 25, 2017</pubDate>
         <description>What Business and IT Leaders Should Know When Evaluating the IBM z14 This summer has been an exciting one for companies transforming to digital businesses with IBM's release of the groundbreaking z14 – indeed, our mission-essential community never stops moving! In Part 1 of this series, I shared why we in the vendor community believe the new IBM z is a game changer, creating a new System of Trust. As part 2 of this series, I'd like to explain what business and IT leaders should know about CA's support for the new z14. At CA, it is our goal to support you from day one as you evaluate a new z14 system or as you plan to upgrade, providing software that delivers new features and capabilities in an agile, continuous manner. We are proud to work in close alignment with IBM to partner with you to transform your business. So, while there are many factors to consider when deciding if you want to upgrade to the z14, the one thing you do not need to be concerned about is the z14 support you will get from CA Technologies. Our unparalleled support is guaranteed – whether or not you choose to upgrade to z14. What's the new z14? The new z14 is empowered with pervasive encryption capabilities that enable up to 12 billion encrypted transactions to be processed each day. As such, the z14 is perfectly positioned to facilitate an exponential rise in the amount of corporate data that is encrypted. Such a metamorphosis is critical, since only 2% of corporate data is encrypted today, leaving a huge threat surface wide open to attack. How is CA supporting customers with the z14? At CA Technologies, we want to ensure that you reap the benefits of the new z14 from day</description>
      </item>
      <item>
         <title>Hyperscale cloud = Next-Gen Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/hyperscale-cloud-next-gen-mainframe-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/hyperscale-cloud-next-gen-mainframe-2</guid>
         <pubDate>March 30, 2017</pubDate>
         <description>Things are going to get interesting as the architectural battles for new workloads such as Machine Learning, AI and the next killer apps based on Blockchain intensify. What's in a name? &quot;A rose by any other name would smell as sweet.&quot; Quoting Shakespeare seems apt at this time of the year when we yearn for spring. In my conversations with CIOs running mission-critical businesses, the discussions around &quot;hyperscale computing&quot; are becoming popular. The question at hand: &quot;How can I get a highly elastic infrastructure similar to Uber's or Amazon's, but with the reliability and availability of the mainframe?&quot; Here is the hyperscale computing definition from Gartner's Lydia Leong: &quot;Hyperscale computing is a set of architectural patterns for delivering scale-out IT capabilities at massive, industrialized scale. These patterns span all layers of the delivery of IT capabilities - data center facilities, hardware and system infrastructure, application infrastructure, and applications. Non-hyperscale components can be layered on top of hyperscale components, but the overall architecture is only 'hyperscale' through the level where all components use a hyperscale architecture.&quot; (Source: Gartner, Hype Cycle for Infrastructure Strategies, 30 June 2016.) A hyperscale cloud can have millions of virtual servers and accommodate increased computing demands without requiring additional space, cooling, or electrical power. The total cost of ownership is typically measured in terms of high availability (HA) and the unit price for delivering an application or data. This got me thinking about IBM's recent investor briefing and how the next generation mainframe (z Systems), expected to ship later this year, actually is set to become the transaction platform of choice for mission-essential workloads. 
The current z13, already delivering five 9s availability, is the world's fastest computer and probably uses the power equivalent of my cappuccino machine! The z13 analyzes transactions using machine learning with Spark on zOS in</description>
      </item>
      <item>
         <title>I am a Mainframer: Jeff Henry</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/i-am-a-mainframer-jeff-henry</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/i-am-a-mainframer-jeff-henry</guid>
         <pubDate>March 22, 2018</pubDate>
         <description>In our latest &quot;I AM A Mainframer&quot; interview series, Jeffrey Frey, Retired IBM Fellow, chats with Jeff Henry. Jeff Henry is a Vice President of Product Management at CA Technologies. Jeff is responsible for the Intelligence Operations and Automation mainframe portfolio of products, including Mainframe Operational Intelligence and Dynamic Capacity Intelligence. Jeff and Jeff discuss the biggest challenge for the mainframe going forward. Jeff also talks about how to best utilize the mainframe to coexist with the distributed world and the cloud world. Read more</description>
      </item>
      <item>
         <title>Part 4: Keeping The Continuous in Continuous Delivery - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/part-4-keeping-the-continuous-in-continuous-delivery-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/part-4-keeping-the-continuous-in-continuous-delivery-rally-software</guid>
         <pubDate>June 6, 2018</pubDate>
         <description>This is the fourth post in a series targeted at helping Product Managers understand the importance of Continuous Delivery. Subscribe here to follow along. The first post in this series explored why Continuous Delivery is critical to making a great product. The second post took a deep dive into a single change delivered via Continuous Delivery to provide an example of what Continuous Delivery might look like in action. The third post explored team practices to support and supercharge a Continuous Delivery process. This post explores how to maintain and improve the health of Continuous Delivery over time. As with any process, things will change over time: what was good enough yesterday might not be good enough today. Software changes, infrastructure changes, bugs and waste creep in. So how do you continue to monitor and improve your continuous delivery process to make sure it stays healthy? This post explores this topic through the story of some continuous delivery challenges occurring at CA Agile Central in 2016 - 2017. Let's jump in. A Need for Action It took us a long time to get to the point where we could release CA Agile Central continuously, whenever a change is ready, normally multiple times per day. Years ago, we released every 8 weeks, then every week, then every day. It took a lot of work, with many setbacks, to gradually remove the blockers holding us back from releasing more frequently. By 2016, most changes were released continuously, when they were ready, and we had Feature Toggles and Rolling Deploys to safely release changes to our users. Around this time, I was the Product Owner for our 10 ft Pole development team. Our job was to make it easy for other teams to continuously deliver changes to</description>
      </item>
      <item>
         <title>ChatOps Critical Piece for Modern Infrastructure Monitoring Strategy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/chatops-critical-piece-for-modern-infrastructure-monitoring-strategy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/chatops-critical-piece-for-modern-infrastructure-monitoring-strategy</guid>
         <pubDate>March 29, 2018</pubDate>
         <description>Do you use ChatOps tools and/or processes as part of your IT infrastructure monitoring strategy? Learn why companies are paying attention and how you can get on board too. Collaboration In A DevOps World Today's IT operational teams are pressured to proactively ensure the reliability of infrastructures that are becoming increasingly dynamic and hybrid in nature. They must rapidly resolve issues that have been identified by their hybrid IT tools while collaborating with various groups spanning cloud, applications and networks. With the adoption of DevOps in various organizations, automation and collaboration are even more critical to iteratively improve and scale performance of applications &amp; supporting infrastructures. As the number of applications and the types of infrastructures grows, IT will need to collaborate with stakeholders frequently. ChatOps Tools To the Rescue ChatOps tools can help you significantly in making this collaboration process easier, faster and traceable. So what is ChatOps? Well, here is a holistic definition: &quot;ChatOps is a collaboration model that connects people, tools, process, and automation into a transparent workflow&quot;. At a basic level you can think of it as a popular messaging app on steroids that allows you to share IT operational data across your entire organization. Slack is a great example of a simple tool (or messaging app) to understand the concept. Sharing IT operational data during an incident in a Slack window is faster than scheduling a conference call among 20 folks on a conference bridge. It is also suitable for sharing ideas related to continuous improvement of an application that, say, an IT administrator came up with while noticing a performance trend on an infrastructure device. At an advanced level, these ChatOps tools can be augmented with ChatBots or AI that give you the ability to query data or provide more context during the same chat</description>
      </item>
      <item>
         <title>To Achieve Continuous Delivery, Shift Automation Left</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/to-achieve-continuous-delivery-shift-automation-left</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/to-achieve-continuous-delivery-shift-automation-left</guid>
         <pubDate>July 10, 2018</pubDate>
         <description>How Continuous Delivery Automation Underpins the Modern Software Factory The software delivery lifecycle is expanding. In barely a decade, the majority of IT infrastructure has shifted from physical servers in data centers to cloud-based computing endpoints. We see more frequent deployments, staging is disappearing as an environment and the slightest change to a process can have huge implications for the complex serverless ecosystems found in many organizations today. With continuous delivery automation, you shift automation left and empower your ability to keep ahead of these changes, while also supporting legacy systems, whether or not those systems have been migrated to the cloud. Indeed, many organizations mistake their data center migration effort for the destination of their digital transformation efforts rather than seeing it as a step in the right direction. The final state of becoming a digital enterprise is to become a continuously improving, safe and on-demand modern software factory. Everything is Code Not long ago, working in ops meant handling physical hardware, manually configuring servers, and writing shell scripts to automate repetitive tasks. Today, cloud-based infrastructure is described, configured and managed with code. Code describes and controls everything from environment firewalls, to network configurations and security policies. This means that application environments can all be stood up quickly and cheaply, but this ease and flexibility can also mean that making adjustments to one part of the system can break something else. Spaghetti code appearing in infrastructure-as-code source files will not only induce error-prone behavior in a single app but can have unforeseen, domino-effect consequences for the entire app ecosystem. The practice of writing code is a new skill that many ops professionals are acquiring, but authoring code isn't enough. 
They must also align with modern software delivery lifecycle best practices like agile methodologies. This</description>
      </item>
      <item>
         <title>What Does Rotisserie Chicken and SD-WAN Have in Common?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-does-rotisserie-chicken-and-sd-wan-have-in-common</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-does-rotisserie-chicken-and-sd-wan-have-in-common</guid>
         <pubDate>January 21, 2019</pubDate>
         <description>The Chop-O-Matic, Ronco Pocket Fisherman, and the Showtime Rotisserie &amp; BBQ. Some great kitchen inventions brought to market by the one and only Ron Popeil. I still remember watching infomercials on TV full of fantastic demos, too good to be true deals, and amazing pitch lines. In particular, it's the pitch used by Mr. Popeil for the &quot;Showtime Rotisserie BBQ&quot; that we're going to talk about today. I can still hear the audience and Ron shouting &quot;set it and forget it!&quot;. Simple and effective enough to help propel Popeil's sales of over 8 million rotisserie BBQs. We've seen a lot of other innovations over the years, from inside the kitchen to inside the data center, that promised the same &quot;set it and forget it&quot; simplicity. One of the latest in the IT technology space to perhaps associate itself with this catchy pitch line is SD-WAN. Can SD-WAN provide the same &quot;too simple to screw up&quot; experience while at the same time saving piles of cash in communication service provider costs? Saving money, probably, but a &quot;set it and forget it&quot; approach will leave you missing out on some of the great advantages and cost saving opportunities of SD-WAN. The keys to the kingdom when talking about SD-WAN are the various policies that control how traffic is going to be handled for various types of applications. These policies are the brains of the performance-based routing approach used in SD-WAN. The problem, however, is that they are not designed to learn. Static latency, loss, and jitter thresholds for applications do not enable an intelligent approach that considers historical performance and the cost of delivering a given service quality at various times. Figure 1: Visualizations such as NetOps heat charts enable operations to easily spot times on the network where performance</description>
      </item>
      <item>
         <title>Machine Learning vs. Machine Analytics Supervised and Unsupervised Learning: A Primer</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/machine-learning-vs-machine-analytics-supervised-and-unsupervised-learning-a-primer</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/machine-learning-vs-machine-analytics-supervised-and-unsupervised-learning-a-primer</guid>
         <pubDate>April 17, 2017</pubDate>
         <description>In this series of blogs, we've explored topics like AI and machine learning, and investigated their impact on IT operations. In today's discussion, I'll be offering a primer on two different versions of machine learning: supervised and unsupervised learning. Understanding these distinctions illustrates how far usable machine learning has come. And I'll outline how unsupervised learning can make a big impact on your IT operations in at least three fundamentally important areas. Supervised learning The defining characteristic of supervised learning is that both inputs and expected outputs are known in advance. You program a machine to recognize an input and train it to deliver the desired output. I sometimes use the example of colored flags to illustrate this point. You teach the machine to recognize blue flags, red flags, orange flags and so on, tuning the results again and again until you get 100% accuracy. This kind of process works fine with simple stimuli and basic tasks. But as we all know, real-world business and IT environments are anything but simple. For example, replacing the colored flags with colored balls would require one to start the supervised learning process all over again. Unsupervised learning Instead of machines being hand-trained by data scientists, unsupervised learning uses algorithms that identify consistent, coherent and recurrent patterns in data. Once the algorithm identifies these patterns, it's able to autonomously identify causality, i.e. relationships within data that flag when future issues are likely to occur. This is a key evolutionary step toward the future. Machine learning built for the real world Understandably, some IT leaders are skeptical about unsupervised learning, and whether it is truly capable of delivering meaningful insights about vast and complex enterprise IT infrastructures. It is. There are approaches that are designed to work at a massive scale for machine learning,</description>
      </item>
      <item>
         <title>Release Management Best Practices: Defined</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/release-management-best-practices-defined</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/release-management-best-practices-defined</guid>
         <pubDate>January 4, 2018</pubDate>
         <description>Ineffective release management can have serious consequences for a business's bottom line. Implement best practices. Best practice concepts are far from revolutionary. In fact, they permeate almost all aspects of our lives. Although they're not considered hard-and-fast rules, there's no doubting they're the guidelines to follow. After all, there are countless ways of baking a cake, but the likelihood is Duff Goldman's recipe is going to yield the best results. Why? Because it's tried and tested, written, rewritten, practiced, honed and perfected. It has built upon foundations laid by others, and then refined. The same goes for release management best practices. Like any process, release management has evolved and matured over time. There's now a whole host of software available, which seeks to manage and automate the release process. As a consequence, what was once considered 'best practice' is now outdated, and far from optimal. The release process used to be near-enough entirely manual. Managers would spend their days making batch files, compiling checklists, running manual builds and configuring .ZIP files in order to deploy. As you can imagine, if a step were to go wrong… oy vey. Although the means to an end have changed, the goal of any IT professional remains more or less the same and can be succinctly summed up: to move code from dev, test or staging to production. Yet, in reality, it's never as simple as it sounds. Unfortunately, unlike baking, there aren't hosts of recipe books to choose from. There's guidance to be found, but it's more abstract than whether you should add a pinch of salt here or a dash of lemon there. But these guidelines should form the foundations of release management best practices. Teams who fail to follow these suggestions are likely to run into difficulties. Collaboration is King Crucially, release</description>
      </item>
      <item>
         <title>Part 2: Is Your Network Monitoring Solution Application Aware?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/part-2-is-your-network-monitoring-solution-application-aware</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/part-2-is-your-network-monitoring-solution-application-aware</guid>
         <pubDate>April 8, 2018</pubDate>
         <description>How CA helps in Application-Aware Network Performance Monitoring and Diagnostics (App Aware-NPMD). In Part 1 of my blog series on App-Aware NPMD, we looked at how comprehensive network monitoring tools deliver enhanced visibility into how network performance can affect the application experience. In Part 2, I take a look at solutions from CA Technologies that enable application-aware network performance management (AA-NPM). These solutions deliver comprehensive, centralized views of all the metrics and measurements needed to understand, manage and optimize performance of critical applications running on your networks. CA Technologies delivers several robust products, including CA Application Delivery Analysis, CA Network Flow Analysis, and CA Unified Communications Monitor, that can be used in tandem or individually to address a range of technological and business imperatives. Through these solutions, an enterprise can leverage a unified, network operations view of all the metrics being gathered, including application response times, network flow data, resource capacity and voice and video quality of service. And more, such as network topology, performance metrics, and alarms. Furthermore, these network monitoring solutions feature the open standards support that enables them to be effectively integrated with a range of third-party and custom infrastructure and network tools. CA Application Delivery Analysis Understanding application response time between infrastructure and network components is critical to managing the end-user experience, which is ultimately the most important measure of network monitoring and performance. CA Application Delivery Analysis delivers an end-to-end response time monitoring solution that enables your IT team to gain the insights it needs to optimize the end-user experience for both traditional and software defined networking (SDN) environments. 
With this solution, you can isolate the source of bottlenecks and verify the performance of applications delivered over the network. CA Network Flow Analysis CA Network Flow Analysis allows administrators to quickly identify top users and</description>
      </item>
      <item>
         <title>State of the Art Container Monitoring with CA APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/state-of-the-art-container-monitoring-with-ca-apm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/state-of-the-art-container-monitoring-with-ca-apm</guid>
         <pubDate>March 23, 2018</pubDate>
<description>In our first podcast, we discussed some of the challenges of container monitoring and the key factors that should be considered for a successful monitoring strategy.

In the second podcast in this series, CA's Amy Feldman, Director of Product Marketing, and Andreas Reiss, VP of SWAT Innovation, will dive deeper into CA's container monitoring solution and how it enables you to gain a better understanding of application performance in containerized environments.

Please have a listen and share your thoughts with us in the comments. 


To learn more about CA's solution, visit our Docker Monitoring page.
</description>
      </item>
      <item>
         <title>Update your Network Management User Attributes via Web Services</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/update-your-network-management-user-attributes-via-web-services</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/update-your-network-management-user-attributes-via-web-services</guid>
         <pubDate>December 9, 2018</pubDate>
<description>Did you know you can update CA Spectrum user attributes using Web Services? Application Program Interfaces (APIs) have become a critical part of the business, used to ingest and extract any kind of data from any online resource. APIs are highly beneficial to network management as well as to automation, integration and personalization of systems. CA Spectrum provides comprehensive API functionality that can be leveraged by users or developers to integrate with any 3rd-party solution. Some of the CA Spectrum network management elements you can access via APIs are devices, events, models, attributes, landscapes, alarms, subscriptions and associations. In this tutorial, we will walk through how to read and update user attributes from/to CA Spectrum models. This kind of enrichment is extremely useful for populating your CA Spectrum inventory with user or business information such as &quot;service associated to the device&quot;, &quot;virtual DC of the device&quot;, owner, purpose or any other variable that is relevant for your business. Reading a single network management attribute Reading a single attribute of a CA Spectrum model is very straightforward. We will use the &quot;model&quot; API resource for this purpose. The &quot;model&quot; resource is documented in the Web Services API Reference Guide in the CA Spectrum documentation. Basically, this resource is used to create or delete a model and to read or modify model attributes. The API URL to read an attribute requires two parameters: the model handle and the attribute ID. NOTE: The model handle and attribute ID can be fetched from the Attributes tab in the OneClick console or via API (http://:/spectrum/restful/devices?attr=0x1006e) The API method to use is a &quot;GET&quot; and the URL is: http://:/spectrum/restful/model/?attr= The expected output is displayed below. As can be seen, it returns the model handle, attribute ID and the current value (&quot;MadridVDC&quot; for this example). 
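As a sketch of the call above, here is a minimal Python client that builds the GET URL for a single attribute and parses a returned value. The host name, port, model handle and the exact XML element names are illustrative assumptions, not the literal schema; consult the Web Services API Reference Guide for the response shape of your Spectrum version.

```python
# Sketch of reading a CA Spectrum model attribute over the REST API.
# Host, port and the sample response below are illustrative placeholders.
import xml.etree.ElementTree as ET

def attribute_url(host, port, model_handle, attr_id):
    """Build the GET URL for a single model attribute."""
    return (f"http://{host}:{port}/spectrum/restful/"
            f"model/{model_handle}?attr={attr_id}")

def parse_attributes(xml_text):
    """Collect (attribute id -> value) pairs from a model response,
    ignoring XML namespaces so the parser tolerates schema variations."""
    results = {}
    for elem in ET.fromstring(xml_text).iter():
        if elem.tag.split('}')[-1] == "attribute":
            results[elem.get("id")] = (elem.text or "").strip()
    return results

# Hypothetical response for attribute 0x1006e on model 0x100000:
sample = """<model-response>
  <model mh="0x100000">
    <attribute id="0x1006e">MadridVDC</attribute>
  </model>
</model-response>"""

print(attribute_url("spectrum01", 8080, "0x100000", "0x1006e"))
print(parse_attributes(sample))   # {'0x1006e': 'MadridVDC'}
```

In a real script you would fetch the URL with your HTTP client of choice (with OneClick credentials) and feed the response body to the parser.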
Figure 1:</description>
      </item>
      <item>
         <title>Top 3 Challenges To Creating an Effective DRP - CA Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/top-3-challenges-to-creating-an-effective-drp-ca-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/top-3-challenges-to-creating-an-effective-drp-ca-automation</guid>
         <pubDate>June 12, 2019</pubDate>
<description>Whatever the industry, the days when businesses could continue operating in spite of a major computer glitch are over. As a result, the Business Continuity Plan (BCP) has become a strategic asset for a company's executives faced with mounting risks, be they criminal, technological, climatic or terrorist. As part of the Business Continuity Plan, the Disaster Recovery Plan (DRP) aims to deliver sustained IT services and to reduce any downtime to an absolute minimum. The DRP objectives are specific to each company and are mostly measured with the help of two metrics: the Recovery Time Objective (RTO), which represents the maximum time allowed before recovery, and the Recovery Point Objective (RPO), which specifies the amount of data loss that the business can accept. Disaster Recovery, a leap into the unknown While designing a DRP is something that most companies are well versed in, triggering actual recovery operations is most of the time a big leap into the unknown. A survey by Forrester and the Disaster Recovery Journal delivers interesting insights about companies' readiness for handling a disaster: only 18% of companies from the survey believe they are fully prepared to trigger disaster recovery processes; more than 45% said they do not have central coordination for disaster recovery processes; and only 19% report they are able to test disaster recovery processes more than once a year, while nearly 21% never test them at all. However, for a business to survive a catastrophic event, it is mandatory that the IT organization responds fast. It is the RTO that determines the acceptable limit before business activities are severely impacted. Problem detection, the decision to enact DR, execution of recovery procedures, systems checks after recovery ... the total duration of these operations must be kept under the RTO that has been agreed within the DRP. So,</description>
      </item>
      <item>
         <title>Test Before You Patch</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/test-before-you-patch</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/test-before-you-patch</guid>
         <pubDate>August 21, 2018</pubDate>
<description>Keep your ERP applications up-to-date and secure with automated SAP system copy. For ERP administrators today, security is always top of mind. But recent warnings from the US Department of Homeland Security about ERP vulnerabilities make securing your Oracle and SAP applications even more urgent. Data breaches and unauthorized access can disrupt business-critical processes and negatively impact your customers. Staying up-to-date with security patches is the best way to make sure this doesn't happen to your organization, but good protocol requires that you first test patches against a separate test instance of SAP to confirm that they won't impact operations of your production instance. So how can you speed up testing and implement these important patches as soon as they're available? Automating the SAP system copy process is one way to clear the path of the obstacles that keep you from better security. Break the Barriers that Keep You from Updating In the report I mentioned above, the United States Computer Emergency Readiness Team (part of the Department of Homeland Security) warns businesses that ERP applications are a tempting target for cyber attackers, who have been taking aim at known vulnerabilities in SAP and Oracle. ERP software in on-premises, public, private and hybrid cloud environments is at risk, as are environments that don't have direct Internet connectivity. While this sounds alarming, the recommendation for protecting your ERP applications is simple: always implement the updates in the security patches that Oracle and SAP release regularly for customers. The challenge of patch implementation comes when patches include new or updated functionality in addition to security updates. That updated functionality could break something if pushed forward without testing and needs to be vetted first. The usual approach is to do so on a copy of the production system and test the patch</description>
      </item>
      <item>
         <title>Two-Track User Experience (UX) Research: The Long Game</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/two-track-user-experience-ux-research-the-long-game-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/two-track-user-experience-ux-research-the-long-game-rally-software</guid>
         <pubDate>December 11, 2017</pubDate>
<description>When I began work on the Rally Software team as the new User Experience Researcher, we didn’t have a user research practice, so I set about creating one. I had a strategy, and for a long time, I thought I was executing that strategy with pretty good success. Only recently did I realize that I had failed. In this post, I’ll lay out my original strategy and what made it seem like a good one. Then I’ll describe a better way: specifically, creating two tracks for user research, one that focuses on the immediate needs of our product organization and another that anticipates (and guides) their future needs. My strategy (and why it was wrong) At Rally, we talk about user experience (UX) design as a 4-stage process. (Caveat: we don’t exactly believe there are four distinct stages, but they give us words to talk about design and the activities we do.) It looks something like this: I wanted to create a culture around user research at Rally. My original strategy for doing that was to begin at the end stage, Measurement, and move backwards towards Discovery. That may seem counter-intuitive, but there were a few reasons for that approach: Validity: As a researcher, it’s much easier to answer questions towards the end of the UX process than at the beginning. E.g., “Does this solution work for users?” versus “What do users need?” Opportunity: We were building features, but we didn’t really know if they were working. As an organization, we needed to get ahead of that. Once a feature ships, it’s much harder to redirect engineering resources to fix any issues that crop up. Maybe even more important, every failure burns user experience capital with our users. Value: We needed to show the value of UX research to our stakeholders.</description>
      </item>
      <item>
         <title>Automating the Blockchain Explained</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/automating-the-blockchain-explained</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/automating-the-blockchain-explained</guid>
         <pubDate>February 6, 2018</pubDate>
<description>How to automate blockchain with the CA Automic One Automation Platform Blockchain applications are a hot topic at the moment, and although the technology is in its infancy, more and more companies are investing resources into it. Primarily, organizations are asking themselves what role and value this emerging technology can play in their business. In this blog post, I'd like to demonstrate the benefits of blockchain automation with the CA Automic One Automation Platform. Blockchain in a Nutshell Figuratively speaking, a blockchain can be viewed as an account ledger. Transactions (debits and credits) that take place are recorded on the ledger as data blocks. Blocks are appended each time a new transaction takes place, linking a former block with a new one and creating a chain-like structure, hence the term &quot;blockchain&quot;. Debit and credit transactions that take place on the ledger are recorded and there is a clear chain of these transactions, making the entire process transparent. No individual, group or organization controls this ledger. Transactions are transparent and safe because each blockchain party holds a copy of the blockchain ledger. Blockchain applications allow you to model the ownership and exchange of virtual assets in a business network. An asset could be, for example, a virtual coin, a registration of a license plate, or a property. Compared to classical database applications, there is no single owner of the data nor a single application that works with the data. Every party in the network has its own copy of the data and can change it at will. A change is done in the form of a transaction, e.g. &quot;Buy Property&quot;. These transactions trigger custom code that changes the state of the assets accordingly (e.g. sets a new owner). This is often referred to as a &quot;smart contract&quot;. Transactions are shared with all parties</description>
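The ledger mechanics described above can be sketched in a few lines of Python: each block records the hash of its predecessor, so any party holding a copy can independently verify the whole chain. The field names and hashing choices here are illustrative, not the format of any particular blockchain platform.

```python
# Minimal sketch of an append-only, hash-linked ledger (illustrative only).
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    """Link a new transaction to the previous block via its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})
    return chain

def verify(chain):
    """Re-check every link; any tampering with history breaks a link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, {"action": "Buy Property", "new_owner": "Alice"})
append_block(ledger, {"action": "Buy Property", "new_owner": "Bob"})
print(verify(ledger))                       # True
ledger[0]["tx"]["new_owner"] = "Mallory"    # tamper with history
print(verify(ledger))                       # False
```

This is why every party can hold and mutate its own copy safely: a change that bypasses the transaction mechanism is immediately detectable by everyone else.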
      </item>
      <item>
         <title>Three Essential Elements to Digital Experience Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/three-essential-elements-to-digital-experience-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/three-essential-elements-to-digital-experience-monitoring</guid>
         <pubDate>June 15, 2018</pubDate>
<description>How a holistic approach to digital experience monitoring can help you deliver premium experiences on any device. Phone and tablet apps are old news. Desktop and laptop apps are even older news. Enterprise apps, the same. As users, we're accustomed to these kinds of apps and have certain expectations of them. New devices, new OSs and new platforms keep the innovation rolling, but these forms of apps are well-known entities. TVs have had apps for a while. Set-top boxes &amp; streamers: ditto. Refrigerators: yes. Cars: sure, why not. There's even a smart toaster. I'm sure you can add to the list. The point is that digital experiences through apps are an increasingly pervasive part of our physical world beyond our phone, tablet and laptop screens. When these app experiences go well, all is good. When there's a glitch (can you say &quot;buffering?&quot;), there's a problem. If the glitch happens more than once, there's a big problem. What can organizations that provide these app experiences do to measure experiences and remedy issues? This is where digital experience monitoring comes in. With digital experience monitoring (DEM), organizations can gain visibility into the performance and even the effectiveness of their apps. Many DEM solutions stop here. However, the best DEM solutions can also find problems fast and determine the root cause all the way down to a piece of code or an infrastructure element. Finally, a DEM solution should give recommendations for how to fix any issues causing a negative experience and provide guidance on how to prevent them. Key capabilities for digital experience monitoring include: App Experience Analytics At its core, this is end-user experience monitoring across all app channels to measure the experience and monitor for problems. In addition, crash analytics can quickly determine which apps are experiencing problems on which platforms, devices, carriers and versions. User session playback</description>
      </item>
      <item>
         <title>Continuous Delivery and Robotic Process Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/continuous-delivery-and-robotic-process-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/continuous-delivery-and-robotic-process-automation</guid>
         <pubDate>August 13, 2018</pubDate>
<description>Why Robotic Process Automation Needs Continuous Delivery Robotic process automation (RPA) is often tipped to be the next major transformative innovation within IT. Yet for all the hype, a recent report from McKinsey suggests its success thus far has been limited to a few specific examples, and the majority of organizations implementing it have struggled to gain any meaningful benefit. The level of success a company will have from an RPA investment is very much dependent on how the bots are deployed. If the creation and delivery of bots is slow, error-prone and unreliable, it will not be possible to harness the potential of leading tools such as Automation Anywhere or UiPath. The innovation and competitive edge an organization hopes to gain from its initial investment in RPA can be easily diminished by excessive manual tasks, unplanned work, wait time and technical debt if bot deployment is not efficient. Yet, despite these challenges, RPA adoption shows signs of being valuable in the long run. Forrester has estimated that by 2021, there will be over 4 million robots responsible for administrative tasks and the market will be worth $2.9 billion. That means companies looking to remain competitive must proactively address the challenges of robotic process automation today, or risk being left behind by those who do. While there isn't a quick fix, there are some steps you can take to ensure that your bots can be deployed at will, quickly and reliably. Overcoming Barriers to Effective Robotic Process Automation RPA is designed to make your software processes faster and more efficient, freeing up time for innovation. Ironically, however, the bot deployment process is often the antithesis of this and hinders any transformative initiative. The challenges facing users of RPA include: Streamlining the complexity and time requirements of creating, installing and deploying</description>
      </item>
      <item>
         <title>GA Announcement: CA Unified Infrastructure Management Release 9 Service Pack 1 (CA UIM 9.1.0)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ga-announcement-ca-unified-infrastructure-management-release-9-service-pack-1-ca-uim-9-1-0</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ga-announcement-ca-unified-infrastructure-management-release-9-service-pack-1-ca-uim-9-1-0</guid>
         <pubDate>April 30, 2019</pubDate>
         <description>CA Technologies, a Broadcom Company, is pleased to announce the general availability of CA Unified Infrastructure Management Release 9 Service Pack 1 (CA UIM 9.1.0). As part of our ongoing commitment to customer success, we regularly release updated versions of our products that will make the monitoring process more efficient and secure while adding new capabilities and integrations.

Key capabilities included in this release are:


	Enhanced protection of data in transit with secure bus configuration among hub and robot(s)
	Out-of-the-box enterprise level availability reports
	Priority based alarm policy definition at container group level with inheritance
	Enhanced Operator Console with data filtering and dynamic group management
	Richer integration with the CA AIOps platform for machine learning driven root cause analysis
	Smarter CA Business Intelligence JasperReports® Server 7.1.1


Download your copy of CA UIM 9.1.0 online at CA Support, where you can also utilize CA's case management system. To install the Service Pack on CA UIM 9.0.2, follow the prescriptive procedures on DocOps. If you have any questions or require assistance, contact CA Customer Care.

To stay connected and to learn and share with other customers, join and participate in the CA Unified Infrastructure Management Message Board on our CA Infrastructure Management Global User Community. To review CA Support lifecycle policies, please see the CA Support Policy and Terms located on the CA Support page.

Lastly, don't forget to register for our webinar &quot;Why Adopt CA UIM 9.1.0 And How To Upgrade Today&quot; to learn more about our new features and to get a step-by-step guide on how to upgrade to CA UIM 9.1.0 with ease and reliability.
</description>
      </item>
      <item>
         <title>Innovation Goes Through Strategic Realization Office</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/the-strategic-realization-office-innovation-s-new-best-friend-clarity-ppm-project-portfolio-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/the-strategic-realization-office-innovation-s-new-best-friend-clarity-ppm-project-portfolio-management</guid>
         <pubDate>August 2, 2018</pubDate>
<description>Organizations today must do more than execute on strategy; they must focus on continuous innovation. That's leading progressive organizations to rethink how they turn strategy into reality. Project portfolio management (PPM) became mainstream a few years ago as the process for turning strategy into execution. That has led to a number of improvements, but it has also created a number of challenges. Those challenges have encompassed everything from too much portfolio work being driven &quot;bottom-up&quot; to difficulties in evolving and adjusting the portfolio when new opportunities and threats arise. As a result, progressive organizations are implementing a new concept: the strategic realization office, or SRO (sometimes strategy execution office or simply strategy office). In the next few blog posts we want to look at the SRO in a little more detail, exploring how it supports organizational success. The concept of the SRO is to create a dedicated function that allows organizations to do more than simply execute on a defined strategic plan. It recognizes that the highly disruptive, fast-moving environment organizations exist in today requires an approach that is adaptive, progressive and focused on growth. Advancements in technology combine with rapidly changing customer demands to continuously drive innovation, while competitor actions create constant threats to success. In this climate, organizations cannot rely on cost reduction and efficiency to drive long-term success; they must focus their strategic efforts on growth. In today's world, consistent, substantive growth only comes from the ability to innovate continuously, and that's where the SRO comes in. The SRO owns organizational strategy. It facilitates organizational planning and ensures that all strategic activities are driven from the top of the organization. 
In practical terms, this means: Working with the C-suite and related executive roles to define the goals and objectives for the next period, ensuring that they align with</description>
      </item>
      <item>
         <title>Broadcom Recognized as a Leader in the 2019 Gartner Magic Quadrant for Enterprise Agile Planning Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/broadcom-recognized-as-a-leader-in-the-2019-gartner-magic-quadrant-for-enterprise-agile-planning-tools-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/broadcom-recognized-as-a-leader-in-the-2019-gartner-magic-quadrant-for-enterprise-agile-planning-tools-rally-software</guid>
         <pubDate>April 24, 2019</pubDate>
<description>For a third consecutive year, Gartner named Broadcom (CA Technologies) a Leader in the Gartner Magic Quadrant for Enterprise Agile Planning Tools. This marks the sixth consecutive year that Broadcom (CA Technologies) has been named a Leader by Gartner[1]. We believe this recognition validates our strong vision here at Rally, as well as our ability to deliver innovation that meets the demands of our customers and uniquely addresses market pressures. The state of enterprise agile planning We see the boundaries of agile management expanding far beyond team-level planning. While this capability forms the foundation of most agile tools in the market today, we see customers struggling when it comes to scaling agile across the enterprise. To successfully achieve this objective, agile tools must expand beyond the confines of traditional team-level planning to include support for complex multi-train planning that provides visibility at all levels of the organization. Today, IT leaders are often faced with a number of challenges, including the rationalization of their tech stack, the ability to quickly deliver quality software for their customers, and ultimately, the need to reduce costs through support and scaling of lean-agile development principles. However, the ever-expanding volume and variety of data have made it difficult to manage product development, agile programs, and the SDLC in general. Now more than ever, enterprises require visibility across delivery groups, trains, and programs, while also maintaining guardrails that ensure the quality and consistency of the data being reported. As a result, we see the need for an agile management solution that can support any variety of team experiences, while also maintaining visibility when scaling. 
Reintroducing Rally We believe Rally (formerly CA Agile Central) is one of the most compelling products in the market, bringing along our rich history of agile leadership to customers seeking to transform</description>
      </item>
      <item>
         <title>Why Bother Doing Infrastructure Monitoring When You Don’t Own Any Infrastructure?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-bother-doing-infrastructure-monitoring-when-you-don-t-own-any-infrastructure</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-bother-doing-infrastructure-monitoring-when-you-don-t-own-any-infrastructure</guid>
         <pubDate>April 5, 2018</pubDate>
<description>Working in the IT Infrastructure Monitoring space for some years now, I’ve been asking myself this question lately: in the times of cloud computing, why should you still bother spending time and resources on Infrastructure Monitoring? And my initial reaction was: there is no reason! Needless to say, I seriously wondered about my career choices of the last couple of years. But the more I thought about it, the more I realized that there was more to this question. If your entire definition of Infrastructure Monitoring is monitoring the CPU/memory usage and power supplies of your servers and the air conditioning of your data centre, then – yes – if you switch to a containerized cloud computing approach there is probably not much left in this discipline. But IT Infrastructure Monitoring has evolved over the years. Long gone are the times of simple reactive lights-out monitoring of low-level components. Today’s tools perform automatic baselining of system (and, even more important today, application) metrics and apply dynamic thresholds to them, and they perform trend analysis on each metric or combination of metrics to proactively alert on future issues. The breadth of monitoring aspects that today’s tools cover has also evolved dramatically: application-specific infrastructure monitoring for off-the-shelf components such as MongoDB and Hadoop is covered, as well as good old SQL databases. Synthetic response time monitoring for your middleware and backend components is there too. All of this with zero-touch automation, of course, from automatic device discovery to the detection of installed applications (whether they’re running inside containers or not) to the automatic provisioning of monitoring policies. The Age of “Apps” Don’t get me wrong: the age of “apps” does move CA Application Experience Analytics and CA Application Performance Monitoring into the focus. Having fast response times for the users of your</description>
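The baselining and dynamic-thresholding idea mentioned above can be illustrated with a small Python sketch: the baseline is a rolling mean of recent samples, and an alert fires when a value strays several standard deviations from it. The window size and sensitivity factor are made-up example parameters, not those of any CA product.

```python
# Illustrative sketch of automatic baselining with dynamic thresholds.
from collections import deque
from statistics import mean, stdev

def dynamic_threshold_alerts(samples, window=20, k=3.0):
    """Yield (index, value) for samples that breach the rolling baseline
    by more than k standard deviations."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            baseline, spread = mean(history), stdev(history)
            if abs(value - baseline) > k * max(spread, 1e-9):
                yield i, value
        history.append(value)

# Steady CPU usage around 40%, then a spike:
cpu = [40.0 + (i % 3) for i in range(30)] + [95.0]
print(list(dynamic_threshold_alerts(cpu)))   # [(30, 95.0)]
```

A static threshold (say, 80%) would miss a metric that normally sits at 5% and creeps to 50%; a baseline-relative threshold catches deviations from the metric's own normal behaviour.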
      </item>
      <item>
         <title>Zowe: Increasing the Power of Hybrid IT Through Open Source</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/zowe-increasing-the-power-of-hybrid-it-through-open-source</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/zowe-increasing-the-power-of-hybrid-it-through-open-source</guid>
         <pubDate>October 21, 2018</pubDate>
<description>Hybrid IT is all about harnessing the power of the digital age to deliver, analyze, adapt and innovate your enterprise's critical functions, regardless of technology or platform. The mainframe is key to your hybrid IT strategy, containing a rich repository of data and applications delivered with unmatched scalability and security. Mainframe organizations, however, are challenged with making the mainframe more readily consumable across all platforms and by the next-generation workforce, which limits their ability to deliver value to the business. To help businesses meet this challenge, mainframe industry leaders CA Technologies, IBM and Rocket Software joined forces to create a new open-source mainframe software framework. In collaboration with the Linux Foundation's Open Mainframe Project, we created Zowe, a modern interface for z/OS that brings open source to the mainframe. Zowe enables application development and operations teams to securely manage, control, script, develop and interact with the mainframe like any other cloud platform. In this way, Zowe is key to building an integrated and agile mainframe. The Case for Open Source How will Zowe benefit the mainframe platform, ecosystem and our industry? Ensure long-term mainframe innovation. Participate in the initiative and collaborate with other enterprises in further developing the mainframe and preparing it for the future of business. As an open-source project, you'll leverage the latest innovations from the community to help extend the viability of the platform. Adopt new technologies and capabilities. Community collaboration around the Zowe framework and standards allows for more integration opportunities and choices with respect to technology, providing the agility you need to effectively respond to market changes using the latest innovations. Address the mainframe skills gap. Zowe provides simplified and familiar infrastructure services for the mainframe. 
Programmers can interact with the mainframe using familiar tools and frameworks, increasing productivity and ensuring business continuity. It's Your Move As an industry, I encourage</description>
      </item>
      <item>
         <title>Automation Trends and Opportunities</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/automation-trends-and-opportunities</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/automation-trends-and-opportunities</guid>
         <pubDate>May 8, 2019</pubDate>
<description>Enterprises around the globe are in the midst of an arms race for innovation and agility, seeking to take advantage of the opportunities that data and modern technical capabilities such as automation make available to them. At the same time, there is an ever-increasing expectation to do things faster and to focus on the experience of customers and employees alike. As organizations embark on their individual journeys to modernize through automation, they want to capitalize on tried and tested technology, or collections of technologies, and learn from the experiences of early adopters. Learning from early adopters is essential because early adopters have a tendency to pursue automation haphazardly, and their investments end up being non-strategic, poorly linked to the business and undermanaged. As such, many automation trends fade due to disappointing gains in productivity, or performance, or both. To successfully convert automation investments into value, companies need to put automation to work at the right pace and in the right place. Learning from the successful adoption of automation by companies, and from their ability to convert investments into value, we foresee a select set of trends in automation continuing to take root and create opportunities for companies. Shift Closer to Outcome with Business-centric Automation Experience within this sphere showcases how automation is pushing out of its traditional stronghold within IT and moving closer to the frontlines of business outcomes. Recent advancements in automation such as RPA (Robotic Process Automation) and cognitive automation are complemented by ancillary technologies such as cloud, embedded software, machine learning and blockchain to expand the role of long-established automation keystones such as workload automation and service orchestration, making them key enablers of enterprise agility and digital transformation. 
Automation as a Seamless Fabric Across the Enterprise Expectations of enterprise automation capabilities have evolved over time. Today automation plays an active role in achieving business</description>
      </item>
      <item>
         <title>CA Automic Release Automation Environment Blueprints for All Application Landscapes</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/ca-automic-release-automation-environment-blueprints-for-all-application-landscapes</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/ca-automic-release-automation-environment-blueprints-for-all-application-landscapes</guid>
         <pubDate>November 20, 2017</pubDate>
<description>How can CA Automic Release Automation deliver environments for all of your applications? Modern IT exists in the cloud. At least this is what most DevOps and continuous delivery pundits would have us believe. I would argue, however, that modern enterprise IT exists in on-premises environments as well, even though most of the application architectures are cloud native – especially microservice architectures. Indeed, I can no longer recall the last time I was at an organization that stacked and racked physical hardware and then deployed application artifacts to it. Physical hardware exists within modern IT, but as we all know Virtual Machines (VMs) provide a layer of abstraction to it. This on-premises abstraction layer has become known as a private cloud. Externally leased virtual infrastructure is known as the public cloud. Now, these opening remarks are perhaps stating the obvious, but what I find less obvious are the associated costs of continuous delivery in the cloud - private, public or hybrid. Over time, VM sprawl eats away at budgets, and there can be great trepidation over decommissioning a VM or cloud environment because the consequences are often unknown and therefore potentially disastrous. Many organizations I visit simply &quot;tag&quot; these VMs or environments as snowflakes and pay the money to lease more external infrastructure or add a few more racks to their private DC.
Environment blueprints can help
Gartner has said that Application Release Automation, or ARA, tools are an essential part of enabling DevOps. I believe this to be true, and I also believe that ARA tools, such as CA Automic Release Automation, that address the snowflake problem can provide significant cost savings – above and beyond the benefits promised by DevOps and continuous delivery. CA Automic Release Automation allows IT to provide and use just-in-time (JIT) environments –</description>
      </item>
      <item>
         <title>Container Monitoring 101 - CA APM Podcast Series</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/container-monitoring-101-ca-apm-podcast-series</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/container-monitoring-101-ca-apm-podcast-series</guid>
         <pubDate>March 13, 2018</pubDate>
<description>In this podcast, CA's Amy Feldman, Director of Product Marketing, and Andreas Reiss, VP of SWAT Innovation, cover the challenges of container monitoring and the key factors that should be considered for a successful monitoring strategy. Please have a listen and share your thoughts with us in the comments.

Learn more about how CA does Docker Monitoring.
</description>
      </item>
      <item>
         <title>Common Container Performance Issues and How to Fix Them</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/common-container-performance-issues-and-how-to-fix-them</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/common-container-performance-issues-and-how-to-fix-them</guid>
         <pubDate>February 19, 2018</pubDate>
<description>Containers enable a powerful DevOps approach to development. They allow developers to assemble software into easily deployable containers that perform consistently across development and production environments. Small and lightweight, containers use fewer resources than virtual hosts. They start, stop and migrate across servers quickly, and help break down monolithic applications into smaller components in a microservices architecture. However, not everything about containers makes life easier. Containers can introduce new challenges, such as subtle performance issues. In this article, I review some container performance challenges that I've faced, and explain how I worked to resolve or avoid them.
Viewing Containers as &quot;Black Boxes&quot;
Containers take black box development and testing to the extreme, and tend to be overlooked in terms of code reviews, internal component monitoring, or even higher-level design reviews. Each of these steps provides a chance to ensure performance needs (i.e., SLAs) are met, to identify potential performance issues and bottlenecks as early as possible, and to improve performance before containers are deployed. The solution is mostly cultural and process-related: even though a container conveniently encapsulates some area of production functionality, be sure to subject it to design and code reviews along the way.
Overlooking Stress Testing
Containers support easy, quick, and dynamic spin-up and server migration, elastically scaling to meet user and system needs on demand, often in cloud environments that further support this. This can lead developers and even QA managers to believe stress testing isn't needed; the container management tool will simply spin up additional instances. A corollary is a lack of testing in varying environments, either across providers or across multiple offerings from a single provider or datacenter. 
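Stress testing of the kind described here can start as simply as driving concurrent load at the containerized service with a standard tool such as Apache Bench; a minimal sketch, where the host, port and path are placeholders for your own service:

```shell
# Fire 5000 requests at the service with 100 concurrent clients.
# 'container-host:8080/api/health' is a placeholder endpoint - point
# this at a representative route of your containerized application.
ab -n 5000 -c 100 http://container-host:8080/api/health
```

Watch whether the orchestrator actually spins up additional instances under this load, and repeat the run in each target environment rather than only one.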
The solution, again, is mostly cultural and process-related: Use tools to simulate large numbers of users in both random and set scenarios with patterns of usage. Don't forget to look beyond</description>
      </item>
      <item>
         <title>Securing Database Communication in CA UIM 9.0.2 with TLS v1.2 (Oracle)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/securing-database-communication-in-ca-uim-9-0-2-with-tls-v1-2-oracle</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/securing-database-communication-in-ca-uim-9-0-2-with-tls-v1-2-oracle</guid>
         <pubDate>February 6, 2019</pubDate>
<description>CA Unified Infrastructure Management 9.0.2, a leading infrastructure management monitoring solution, provides enhanced security by supporting Transport Layer Security (TLS) v1.2. In a past article, we shared how this works with Microsoft SQL Server. This article will focus on how you can support TLS v1.2 with Oracle, without compromising on product performance. At a high level, enabling TLS v1.2 support in CA UIM 9.0.2 is a two-step process: Perform configurations on the Oracle database server. Enable the TLS option and provide relevant details during installation of the UIM Server.
Supporting TLS v1.2 on Oracle
The following diagram shows the steps that are required to enable TLS v1.2 when the UIM database is Oracle 11g or 12c:
Configurations on the Database Server (Oracle)
Perform the following tasks on the database server (Oracle):
Verify the FQDN System Requirement. Verify that your full computer name is an FQDN (for example, VI02-E74.ca.com). If not, add the domain name (for example, broadcom.com) to the computer name.
Verify and Apply Patches for Oracle. For Oracle 11.2, which does not support TLS v1.2 by default, download and install the 11.2.0.4.2 DBPSU patch and p25874796_112040_MSWIN-x86-64 from Oracle Support.
Disable Previous Certificates. Change the registry keys to disable all the previous versions of certificates on the database server.
Perform Wallet Configuration for the Server. Use the Oracle Wallet Manager user interface or the orapki utility (command line) to perform the wallet configuration for the server, which includes the following tasks: Create a server wallet. Set auto-login to true. Create a certificate request. Export the certificate request into a file and send it to the Certificate Authority (CA). Get the certificate from the CA. Import the user certificate into the server wallet. Perform Wallet Configuration for the Client.
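The server-side wallet tasks just listed can be sketched with the orapki command line; a minimal sketch, where the wallet path, password and DN are placeholders for your own values:

```shell
# All paths, passwords and DNs below are placeholders - adjust to your site.
WALLET=/opt/oracle/wallets/server
mkdir -p "$WALLET"

# Create the server wallet with auto-login enabled
orapki wallet create -wallet "$WALLET" -pwd "WalletPasswd123" -auto_login

# Create a certificate request for the database server's FQDN
orapki wallet add -wallet "$WALLET" -pwd "WalletPasswd123" \
  -dn "CN=VI02-E74.ca.com" -keysize 2048

# Export the request to a file to send to your Certificate Authority
orapki wallet export -wallet "$WALLET" -pwd "WalletPasswd123" \
  -dn "CN=VI02-E74.ca.com" -request "$WALLET/server_req.csr"

# After the CA returns the signed certificate, import the trusted
# root certificate and then the user (server) certificate
orapki wallet add -wallet "$WALLET" -pwd "WalletPasswd123" \
  -trusted_cert -cert "$WALLET/ca_root.crt"
orapki wallet add -wallet "$WALLET" -pwd "WalletPasswd123" \
  -user_cert -cert "$WALLET/server_cert.crt"
```

Oracle Wallet Manager performs the same tasks through its user interface.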
Use the Oracle Wallet Manager user interface or the orapki utility (command line) to perform wallet configuration</description>
      </item>
      <item>
         <title>What's the Future of the Mainframe? Find Out What Customers Think is Next for the Platform.</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/what-s-the-future-of-the-mainframe-find-out-what-customers-think-is-next-for-the-platform</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/what-s-the-future-of-the-mainframe-find-out-what-customers-think-is-next-for-the-platform</guid>
         <pubDate>June 6, 2018</pubDate>
<description>The digital enterprise operates in an always-on, multi-cloud world, where flexibility is key. Mainframes are an integral part of this architectural vision. To that end, CA Technologies, as part of our membership in the Open Mainframe Project, engaged in a study, entitled 2018 State of the Open Mainframe Survey Report, which sought to understand buyers' perceptions of the mainframe and, more specifically, Linux, including the future of the platform, common myths, and barriers to success. Despite the changing nature of the always-on digital world, the mainframe remains a consistently reliable, scalable, and secure platform; and it stacks up. As noted in the report, &quot;The Linux kernel powers everything from IoT devices to the highest performing supercomputers. Linux Foundation research reveals that, as of 2017, the Linux operating system runs 90 percent of the public cloud workload, has 62 percent of the embedded market share, and 99 percent of the supercomputer market share.&quot; Not too shabby. To start, where does the mainframe fit in the grand scheme of things? There is plenty of chatter about multi-cloud, hybrid and private cloud infrastructures, but that is not to say that the platform is being edged out. At the end of the day, organizations look to put the right workloads on the right platform, and increasingly hybrid cloud is becoming the preferred architecture to drive innovation. In fact, mainframes fit right into the hybrid and private cloud infrastructure dynamic, with the vast majority of survey respondents considering the cloud an augmentation to, and not a replacement for, the mainframe. The consensus on the subject is that a cloud environment is neither as securable nor as great a value for the cost as the mainframe. Ideally, businesses should leverage the best of both computing worlds, drawing upon their strengths and supplementing their weaknesses to optimize</description>
      </item>
      <item>
         <title>Bridging the gAPP</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/bridging-the-gapp</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/bridging-the-gapp</guid>
         <pubDate>July 1, 2018</pubDate>
<description>How to make traditional enterprise apps agile and bleeding-edge apps reliable and compliant
In the digital era, a lot of companies have placed unwavering trust in microservice architectures as they build their new customer-facing apps. To run and manage these applications, they invest a lot of effort in getting Kubernetes, OpenShift and/or other container management platforms production-ready. However, in reality, these apps are only the tip of the iceberg. There are still a lot of traditional, old-school apps and IT systems out there which have to serve as a back end and run hand-in-hand with new-world technologies. Although containerization simplifies technical deployment routines and activities, there are still a myriad of further elements to a release that need to be considered. Barriers and obstacles can arise when any of the following are not addressed and/or integrated into the continuous delivery pipeline:
Approvals
Change management and ITIL
Release management: dependencies on other applications and services
Quality: more (automated) tests across services
Shift-left: faster feedback to development
Compliance: the need to provide full audit trails end-to-end and to set up role-based access control
Therefore, enterprises need to establish a continuous delivery solution that allows improved time to market and facilitates agility within more traditional legacy apps. Correctly implemented continuous delivery results in shorter feedback loops and better visibility, which in turn leads to improved quality. When undertaking a continuous delivery initiative, the enterprise needs a solution that removes friction and barriers to execution via a number of competencies.
End-to-End Value Delivery
Ideas only matter when they are in the hands of users producing value. Therefore, it is vital to manage complex value streams with multiple apps and dependencies, always knowing the business impact of the pipeline. 
This ensures that innovations reach users in a rapid, predictable manner, with clear visibility of progress</description>
      </item>
      <item>
         <title>Securing Database Communication in CA UIM 9.0.2 with TLS v1.2 (Microsoft SQL Server)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/securing-database-communication-in-ca-uim-9-0-2-with-tls-v1-2-microsoft-sql-server</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/securing-database-communication-in-ca-uim-9-0-2-with-tls-v1-2-microsoft-sql-server</guid>
         <pubDate>January 14, 2019</pubDate>
<description>In today's highly competitive environment, it is hard to find a successful software application without a robust security mechanism. With new security threats emerging every day, organizations need a continued focus on enhancing security in their software applications if they want to be market leaders. Without the ability to meet the ever-increasing security demands in their products, organizations cannot capture the market or do business with various institutions (e.g. federal or financial institutions). Organizations that can secure their applications without compromising on performance are bound to edge out competitors who cannot. CA Unified Infrastructure Management 9.0.2, a leading infrastructure management monitoring solution, comprehensively addresses both security and performance. It provides enhanced security by supporting Transport Layer Security (TLS) v1.2 while communicating with the CA UIM database, Microsoft SQL Server. This support enables the CA UIM Server to establish secure communication with the CA UIM database without compromising on product performance. Various probes have been enhanced so that they can now communicate in a TLS v1.2-compliant CA UIM 9.0.2 environment. At a high level, enabling TLS v1.2 support in CA UIM 9.0.2 is a two-step process. We recommend that you back up your database before you start the process explained in this article: Perform configurations on the Microsoft SQL Server database server. Enable the TLS option and provide relevant details during installation of the CA UIM Server.
Supporting TLS v1.2 on Microsoft SQL Server
The following diagram shows the steps that are required to enable TLS v1.2 when the CA UIM database is Microsoft SQL Server 2012, 2014, 2016, or 2017.
Configurations on the Database Server (Microsoft SQL Server)
Perform the following tasks on the database server (Microsoft SQL Server):
Verify the FQDN Requirement. 
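The database backup recommended above can be taken, for example, with the sqlcmd utility; a minimal sketch, where the server name, database name and backup path are placeholders for your own environment:

```shell
# Back up the CA UIM database before changing any TLS settings.
# Server, database name and backup path are placeholders - adjust to your site.
sqlcmd -S "VI02-E74.ca.com" -E \
  -Q "BACKUP DATABASE [CA_UIM] TO DISK = N'D:\Backups\CA_UIM_preTLS.bak' WITH INIT"
```

The -E switch uses Windows (trusted) authentication; substitute -U/-P if you connect with SQL Server credentials.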
Verify that your full computer name is FQDN (for example, VI02-E74.ca.com). If</description>
      </item>
      <item>
         <title>The CA Automation Marketplace</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-ca-automation-marketplace</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-ca-automation-marketplace</guid>
         <pubDate>June 17, 2018</pubDate>
<description>Why It's Crucial to an Automation Center of Excellence
We are proud to host the world's first and largest marketplace dedicated to business automation. It has been built for customers, staff and consultants to share their plugins and extensions to the CA Automic One Automation platform. We have designed the marketplace to provide a forum that enables engagement and collaboration within our community, to share developments and pick up on trends that could ultimately evolve into future releases. However, the CA Automation Marketplace is more than a purely transactional destination; it is also a social one. The CA Automation Community, including developers, architects and evangelists, contributes its knowledge, expertise and experience, as do thousands of our users, from newcomers to veteran automation innovators. So, if you want to discuss certain topics or challenges that you are facing, you can join the community to see what others are saying!
Why is the CA Automation Marketplace Useful?
Why you might use the Marketplace differs from person to person. Just like a marketplace in the real world, you might be there for business purposes, or simply for fun. With plugins ranging from big data to containerization, both community-authored and CA-supported, you might be visiting to download a plugin or to share something you've made from scratch. 
There are several key features that make the marketplace integral to any automation initiative that your organization is looking to implement:
A central location to download or contribute plugins to drive automation for enterprises
Access to request new solutions
Integration with the CA Automation community to provide ratings, reviews and feedback on existing plugins
Statistical insights and trends on downloads and searches
Marketplace vaults
Establishing the Automation Center of Excellence
In order to maximize the potential of the Marketplace, we encourage users to upload and share their own</description>
      </item>
      <item>
         <title>Growing Globally with CA API Management: Q&amp;A with T5 Systems</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/growing-globally-with-ca-api-management-q-a-with-t5-systems</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/growing-globally-with-ca-api-management-q-a-with-t5-systems</guid>
         <pubDate>November 2, 2017</pubDate>
<description>I caught up with CA World presenter, Onur Fenar, to hear how T5 Systems is expanding from a local market leader to a global leader in integration and middleware through implementing API Management solutions from CA Technologies.
Our conversation
CO: Hi Onur! Thanks for taking the time to speak with me today. To start, can you give us a bit of background about yourself and founding T5 Systems?
OF: Sure, thanks for chatting with me. After working in different roles for many international software vendors, I believed that it was the perfect time to start my own business. Founding T5 Systems was one of the most important decisions I made in my life.
CO: For those who may be unfamiliar with T5 Systems, what business challenges do you solve for clients?
OF: Before starting up the company, we coined T5 Systems as “The Integration Company,” which has served to explain in a few words what we do best for clients. We primarily engage in consulting and partnership activities around the implementation of new-generation integration software.
CO: You often work with partners in your engagements; what challenges drove you to seek out the CA API Gateway solution in particular?
OF: Certainly, we work a lot with partners. But we had a significant challenge to find the perfect API solution to use in our integration projects. We were familiar with the CA API Gateway as Layer 7 prior to its acquisition by CA, so we knew from the beginning that we would implement the CA API Gateway and related components in our business. The selection process was simple.
CO: Once you decided to work with CA, what business objectives were you hoping to accomplish from implementing the CA API Gateway?
OF: As a company, we have been very focused on</description>
      </item>
      <item>
         <title>It's Here and Ready: Securing the Connected Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/it-s-here-and-ready-securing-the-connected-mainframe</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/it-s-here-and-ready-securing-the-connected-mainframe</guid>
         <pubDate>March 26, 2017</pubDate>
<description>Security and compliance trends you need to know now to protect your mission-essential assets.
I often say that data is the most regulated artifact in the application economy. Think about it: your assets are your data - whether it be sensitive employee records or financial transactions - it's what your business runs on. And when the data isn't being managed the way it should be, the business suffers in the form of a breach or a failure to comply with an audit. The reality with data is that the mainframe stores the majority of all corporate business data globally. And the other reality is, not many security executives know where all of their sensitive data is - let alone are able to proactively secure all of it. Not to mention, new waves of industry regulations, hacking and ransomware, and machine learning are influencing security models moving forward. When I think about enabling our customers to accelerate their ideas into real outcomes through our mainframe security and compliance solutions, there are a few trends I think we as security professionals need to keep an eye on.
Attention on data security and compliance management is increasing
As the mainframe interconnects with everything else in your business (into the Internet of Things and beyond), the focus of its security shifts - and it's all around data security and compliance management. Consider this: there are 400 mainframes connected to the internet worldwide and accessible to anyone with a login screen, while the mainframe simultaneously processes 2.5 billion transactions per day. With mainframe data growing at an exponential rate and moving off the platform, the risk of accidental data disclosure and malicious data breaches is growing exponentially as well. Then think about compliance - industry regulations across verticals are increasing the need for controls around privileged</description>
      </item>
      <item>
         <title>The Importance of Big Data Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-importance-of-big-data-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-importance-of-big-data-automation</guid>
         <pubDate>December 14, 2017</pubDate>
<description>Big data automation must be part of your survival kit in the digital age
For better or worse, big data has irrevocably altered the digital landscape. The explosion in the variety, velocity, volume and value of information presents an abundance of previously unimaginable opportunities, but it also creates a number of challenges that need to be successfully navigated. Big data automation can mean the difference between harnessing business insights at speed or getting buried in your data. This reshaped technical world poses the following question to organizations: do you risk presenting stale, incorrect or erroneous data to your customers? Because, with 2.5 quintillion bytes of data now being created every day, finding a way to manage and harness such potential is a new experience for everyone. And if you don’t take advantage, your competitors will.
Vast Amounts of Unstructured Data
By now we are mostly well versed in the necessity for and the advantages of big data; it can be used to improve decision making, get a better understanding of our customers, enhance security, improve results or augment your pre-existing data warehouse capabilities. This new wave of data consumption is now ubiquitous, and its accessibility has given businesses the opportunity to streamline internal and external processes. The way information can be discovered, stored and distributed has been significantly enhanced, and will shape an organization’s understanding – enabling it to be far more competitive within the market. Big data is giving everyone the opportunity to make faster and more informed decisions, recognize new revenue streams and deliver highly personalized customer experiences at massive scale. While this could be a game changer for your organization, there are nonetheless certain hurdles to overcome. Vast quantities of unstructured data now reside in data lakes; everything is there, but comprehending it is quite a different matter.</description>
      </item>
      <item>
         <title>Building A Bridge To True Cross-Enterprise Development</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/building-a-bridge-to-true-cross-enterprise-development</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/building-a-bridge-to-true-cross-enterprise-development</guid>
         <pubDate>April 2, 2018</pubDate>
<description>Imagine if you could manage mainframe processes with the same agility as the typical cloud environment, delivering a seamless, simple, and secure user experience. Turning that dream into reality would fundamentally redefine how enterprise development and operations professionals interact with and work on the mainframe. Due to three driving factors, it is now mission-critical to empower the people at the center of the mainframe with a new set of tools to create a modern user experience:
1. Businesses need to make mainframe development attractive and accessible for a new generation of developers. Businesses are experiencing a 'double whammy': experienced mainframe developers are retiring, and the generation replacing them wants to ramp up quickly on the mainframe – without acquiring deep mainframe expertise. Consequently, there's an urgent need for businesses to rethink application development for the mainframe.
2. The mainframe needs to become part of the enterprise DevOps initiative. Line-of-business teams are increasingly adopting DevOps principles, but often struggle to integrate mainframe development into their existing delivery pipeline. This could make the mainframe a significant bottleneck. Businesses therefore need to revise their DevOps processes to support mainframe applications.
3. Near 'zero-touch' and 'zero-cost' development and testing environments on the mainframe are increasingly in demand. The complexity of creating dev/test environments means they are often maintained long after they are needed, creating friction between the testing and development functions. Businesses need to address the time and resource demands associated with provisioning dev/test environments.
Let's take a closer look at each of these factors and explore what the corresponding mainframe toolset looks like and the benefits these tools deliver.
Making Mainframe Development Attractive and Accessible
Many organizations are facing a generational shift in their workforce. 
Mainframe experts are retiring, ceding responsibility for mission-essential applications to a new generation of developers. These modern developers are a</description>
      </item>
      <item>
         <title>New Year’s Resolutions for Your Network Monitoring Software</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/new-year-s-resolutions-for-your-network-monitoring-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/new-year-s-resolutions-for-your-network-monitoring-software</guid>
         <pubDate>January 6, 2019</pubDate>
<description>It's that time of year again - the time we all put together a list of ways to better ourselves. Whether it's exercise, quitting a bad habit or just being a little more positive in our daily lives, we all strive to improve many aspects of ourselves. It's also never a bad idea to continually look to improve how IT operations run, specifically our network monitoring software. It's a new year, so let's see what CA and Broadcom have in store for you in 2019 to improve your network monitoring software strategies and deployments.
Unify Your NetOps Visibility
Today's enterprise relies on highly available network infrastructure. This increasingly complex mix of traditional and modern architectures requires the network engineer to be a &quot;Jack of all trades and master of many&quot;. Three facets of the network that should be top of mind for these engineers are &quot;fault&quot;, &quot;performance&quot; and &quot;flow&quot;. Network engineers and architects must be comfortable with, and even expert at, looking into these three areas of the network stack within a single experience. Every aspect of fault, performance and flow in the network has to be understandable and relatable, with adequate depth and correlation, for engineers to arrive at a quick resolution that ensures the application and customer experience. Be it connectivity with AWS, cloud-based Wi-Fi in the last mile, SD-WAN in data centers or traditional WAN connectivity, the CA Network Operations Analytics portal brings together fault, performance and flow across traditional and modern networks and improves time to value, while making engineers adept at dealing with network-wide scenarios.
Assure Your Software-Defined Deployments
SDN technologies may be a little more mainstream than they were years ago, but that doesn't mean your traditional network is going away any time soon either. That's why it is so important that we do</description>
      </item>
      <item>
         <title>Building a Data Warehouse to Guarantee Speed, Agility and Reliability</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/building-a-data-warehouse-to-guarantee-speed-agility-and-reliability</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/building-a-data-warehouse-to-guarantee-speed-agility-and-reliability</guid>
         <pubDate>April 30, 2018</pubDate>
<description>Why automation prevents you from getting bogged down in expensive and inefficient complexity when building a data warehouse
Can your organization wait for critical insight? Is competitiveness something you're willing to relinquish? Will you give up speed and agility rather than streamline the complex interdependencies between the different parts of your technology stack? Ensuring the quick and reliable delivery of business-critical information to users is vital in today's ultra-competitive marketplace, and requires building an efficient, dependable enterprise data warehouse. However, coordination issues between file transfer tools, databases and applications can lead to errors, delays and inaccuracies - challenges that can only be overcome with effective data warehouse automation.
What is a data warehouse?
A data warehouse is a storage facility for structured data from a range of sources, which can convert data into a usable, unified format and prime it for analysis. Data warehouses allow data to be quickly and easily cleaned, consolidated, interrogated and analyzed. Essentially, they organize your data so it can better answer questions. Data warehouses have uses in practically every industry. For example, a retail company could collect data from consumer orders, customer shipments and customer payments across multiple entities, applications and databases. It could then feed this into a data warehouse to get analytics and run reports on, for example, customer demographics or the length of time from quote to payment. Often, the company will then choose to create multiple data marts from the warehouse, which provide access to different subsets of the data for the users in the company who need them.
How do we build a data warehouse?
Before building a warehouse, it is vital to understand the types of questions users will be asking of it. This will enable the system to pre-sort the data and deliver information quickly. The warehouse can then be set</description>
      </item>
      <item>
         <title>Monitoring Software for Dynamic Multipoint VPN Tunnels</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-software-for-dynamic-multipoint-vpn-tunnels</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/monitoring-software-for-dynamic-multipoint-vpn-tunnels</guid>
         <pubDate>March 7, 2018</pubDate>
         <description>Do you know? You can monitor dynamic multipoint virtual private network tunnels with CA Spectrum’s monitoring software. Organizations that have multiple locations – banks, insurance agencies, retail stores, healthcare providers, etc. – are looking at new software-defined WAN technology to increase their options for transport between locations while providing the security that traditional leased lines offer. By increasing their options beyond traditional MPLS and Frame Relay links to include both the public Internet as well as new 4G LTE networks, organizations can also lower their transport costs. A dynamic multipoint virtual private network (DMVPN) is a secure network that exchanges data between sites and intelligently balances traffic over multiple WAN routes. DMVPN is a popular solution for organizations requiring encrypted WAN connectivity between remote sites. WAN connection costs are lower when the public Internet is used to replace or provide backup for private leased lines and Frame Relay links. Cisco’s SD-WAN solution, Cisco Intelligent WAN (Cisco IWAN), uses a prescriptive, transport-independent design based on DMVPN. Do you know you can monitor DMVPN using CA Spectrum’s monitoring software? Monitoring the health and availability of DMVPN networks with CA Spectrum is a critical function if your SD-WAN solution includes policies that allow for fluctuating routes for mission-critical applications. CA Spectrum will also help organizations understand the load on the network from business applications, ensure policies are in effect for business-critical applications and evaluate bandwidth, resource utilization and capacity planning. Figure: CA Spectrum NetOps topology view of a Cisco IWAN deployment. From a network topology perspective, one can make use of the CA Spectrum VPN Manager to provide VRF-based topological representation of Layer 3 VPN sites, as depicted above in the topology view of a Cisco IWAN deployment using DMVPN tunnels. 
This network topology view gives visibility into connectivity</description>
      </item>
      <item>
         <title>Innovation in a B2B World: Beyond Analysts, Escalations and Sales Calls - Rally Software®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/innovation-in-a-b2b-world-beyond-analysts-escalations-and-sales-calls-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/innovation-in-a-b2b-world-beyond-analysts-escalations-and-sales-calls-rally-software</guid>
         <pubDate>May 16, 2018</pubDate>
         <description>If you’re building your roadmap from analyst interactions and sales calls and constantly mired in escalation work, you’re going to fail. With today’s rate of change and disruption, you cannot afford to sit back and wait for innovation to fall in your lap. The inevitable death may not come today and it may not come tomorrow, but your product’s death warrant has been signed. Why is this a problem? You get yourself into feature wars You follow general market trends that EVERYONE is aware of Your product becomes disjointed from continually shifting priorities You only serve the ‘sexiest’ users or loudest complainers Usage and NPS decline as your product loses its purpose Four Steps to Ignite Innovation: 1. Leverage all of the channels of feedback available In a B2B world, to get a product to market, it takes a village. On the upside, this gives you many more people to gather intelligence from. If you haven’t done so already, create a channel for your sales, services, and support folks to provide you information about your customers and the market. You can do this through interviewing, setting up an Idea Management site, or just spending a day or week with them. One word of caution: remember to always ‘consider the source’ when evaluating input; work environment can lead to bias. That’s ok, just be aware. 2. Interview, interview, interview There is nothing as valuable as a well-run empathy interview. The ability to deeply understand the needs of your customer and truly put yourself into their shoes is remarkably valuable. Once you have done enough of these interviews within a given persona or segment, your intuition will become your guide. You will see through their eyes and be able to predict reactions. There are three types of</description>
      </item>
      <item>
         <title>How SAP Automation Brings Salvation to Digital Transformation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/how-sap-automation-brings-salvation-to-digital-transformation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/how-sap-automation-brings-salvation-to-digital-transformation</guid>
         <pubDate>June 7, 2018</pubDate>
         <description>Fully automated SAP processes can help you stay one step ahead of nimble digital disrupters. Digital transformation is everywhere. Whether it's reinventing online retail or upending the way broadcast media is consumed, make no mistake: your competitors are fast transforming their businesses to become digital enterprises. At the very least, organizations must now present self-service options for customers via web or mobile devices. Furthermore, many are increasingly required to open up business processes to other organizations and/or external devices via APIs and other means. These changes related to digital transformation require near-instant execution of key business processes, as opposed to periodic batch runs. So, where does SAP fit into all this? As a consequence of all-pervasive digital transformation, your SAP environment needs to transform too. SAP is one of the most mission-critical application suites on today's business front line, but the business processes that run on it extend beyond the SAP environment and run across other systems, which may be on-premises, in the cloud, or in hybrid environments. Automating in Real-Time To deliver the agility and flexibility needed to compete in this era of disruptive digital transformation, your organization needs a higher level of automation—much of it in near real-time. That's not easy. Organizations often find themselves unable to achieve complete process automation. Despite millions of dollars invested in SAP systems running millions of transactions, they cannot automate end-to-end. A key reason is that most end-to-end processes in large organizations involve both SAP and non-SAP applications, which requires orchestrating processes that span on-premises SAP, database platforms, cloud applications, external data sources and third-party infrastructure platforms. 
The complexity of this IT landscape introduces processing inefficiencies and creates technology silos—the bottlenecks to business agility. Moreover, extra complexity is introduced when process flows span multiple SAP clients or instances. Connecting the Silos What's missing?</description>
      </item>
      <item>
         <title>24x7 Security and Access Management with CA API Management: Q&amp;A with Broadridge Financial Solutions - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/24x7-security-and-access-management-with-ca-api-management-q-a-with-broadridge-financial-solutions-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/24x7-security-and-access-management-with-ca-api-management-q-a-with-broadridge-financial-solutions-layer-7-api-management</guid>
         <pubDate>November 9, 2017</pubDate>
         <description>I caught up with CA World presenter, Jeffrey Klein, to learn how DevOps and a microservices architecture enabled by CA API Management solutions have eliminated post-deployment outages and fueled integration and scale at Broadridge. Our conversation CO: Hi Jeffrey! Thanks for taking the time to speak with me today. To start, can you give us a bit of background about yourself and how you got into your current role at Broadridge? JK: I came to Broadridge Financial Solutions, Inc. from a medium-sized business where I was responsible for the full stack and managed the migration from legacy to modern ERP. I’ve been in a cross-functional role within the SSO-IdM group [Single Sign On-Identity Management] at Broadridge for a little over three years, as lead business analyst while also contributing in a technical capacity to CA Single Sign-On (SSO) development efforts. CO: What are some of the challenges Broadridge is currently facing around API management? JK: As API usage within the enterprise has grown, so too has the need to secure access to those APIs. The IAM [Identity/Access Management] team began to receive more and more requests to leverage CA SSO in some capacity to protect Broadridge’s APIs. CO: What led you to implement CA API Management in addition to CA Single Sign-On? JK: We needed a more flexible solution with a broader feature set that was simple to deploy and scale. CA SSO is the security solution on which we’ve standardized internally, so simple integration with CA SSO was a key driver behind the decision to go with CA API Management. We also needed reliable support for a tool that will need to provide 24×7 security to multiple Tier-1 applications around the globe. CO: What business objectives were you hoping to accomplish through working with CA? JK: We sought</description>
      </item>
      <item>
         <title>What Is Zero-Touch Infrastructure Monitoring and Why Is It Important?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-is-zero-touch-infrastructure-monitoring-and-why-is-it-important</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-is-zero-touch-infrastructure-monitoring-and-why-is-it-important</guid>
         <pubDate>April 5, 2019</pubDate>
         <description>From VMs to containers to cloud, IT infrastructure is getting more and more virtual, dynamic, and abstract. System administrators tasked with monitoring such diverse IT infrastructures are increasingly looking to automate the provisioning and monitoring process. Traditional monitoring requires them to log into every device to deploy/configure the monitoring agent. This manual and repetitive task is error-prone and time-consuming, and any mistake could result in a critical monitoring outage. We are continuously working to build a monitoring solution that brings faster time to value with minimal administration, which is why we are constantly evolving our support for zero-touch setups in CA Unified Infrastructure Management. There are many benefits of zero-touch monitoring, including: Zero monitoring loss, from when the system is provisioned until it is decommissioned Minimal configuration errors Reduced cost More time for strategic initiatives Support for full process automation How Does Zero-Touch Infrastructure Monitoring Work? This is how you can set up zero-touch monitoring in CA UIM SaaS: The first step is to create a Dynamic Group to monitor the filtered devices found during device discovery The next step is to choose and create a monitoring profile using the newly discovered/onboarded devices With a Monitoring Configuration Service (MCS) profile, a corresponding CA default alarm policy is automatically created out of the box that contains our recommended threshold values The next phase is Device Discovery, where we learn about a new device being added to the computing environment. There are multiple ways that CA UIM can learn about a newly added device: Our discovery agents run network device discovery by scanning IP ranges recursively at the scheduled day and time. 
A new virtual machine is created in a VMware vCenter or a new server instance is created in a public cloud environment like AWS or Azure Optionally, you</description>
      </item>
      <item>
         <title>15 Project Portfolio Features You Need (Part 2)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-2-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/15-project-portfolio-management-features-designed-to-help-your-key-business-initiatives-succeed-part-2-clarity-ppm</guid>
         <pubDate>August 14, 2018</pubDate>
         <description>The second in a three-part series on how CA helps businesses solve their most pressing issues with modern project portfolio management features. At the end of 2015, CA Technologies conducted a research study designed to gain a deeper understanding of how the CA Project &amp; Portfolio Management (CA PPM) solution could improve project portfolio management, not only for PMOs but also for product, resource and financial managers, as well as senior executives. A key theme of the study's results could be characterized as &quot;clarity and innovation through simplification.&quot; A simplified, modernized tool provides decision makers with greater visibility. And visibility combined with simplicity helps drive innovation. The study results have acted as a guide for CA PPM development that has resulted in dozens of improvements, five of which are detailed below. 6. Roadmapping for simplified project portfolio management Traditional investment planning is too cumbersome. When users are required to articulate too many project details (features, budgets, architectural plans, team allocations) at too granular a level just to get started, you run the risk of ending up with no meaningful planning at all. CA PPM's roadmap feature serves as a communication tool that allows users to earmark funding and work cycles without having to detail discrete capabilities. Stakeholders can still view and sort investment data to get a clear picture of proposed projects, including how they complement existing projects and impact current investment allocations, but the need to input exhaustive project details is removed. 7. Improving project portfolio management through intuitive staffing Staffing information should be at your fingertips. Many vendors offer search tools meant to help resource managers understand how workers are allocated and find the perfect candidate for their projects. 
But too many variables and extenuating circumstances render most search filters ineffective. CA PPM provides one consolidated view of</description>
      </item>
      <item>
         <title>The modern PMO provides insightful business intelligence - Clarity PPM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/the-modern-pmo-provides-insightful-business-intelligence-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/the-modern-pmo-provides-insightful-business-intelligence-clarity-ppm</guid>
         <pubDate>September 21, 2017</pubDate>
         <description>CA recently partnered with TechValidate to perform a research study that would help us to understand how organizations are leveraging our CA Project &amp; Portfolio Management (CA PPM) solution to evolve their businesses. The study validated the changing role of the modern PMO. When asked, “Who typically consumes your reports?” 95 percent of respondents pointed to managers, while 60 percent also included VPs. Nearly half said their reports made their way up to the C-level. It’s clear that people across the organization—from the resource manager to the VP of product development to the C-level evaluating business opportunities—are using data collected and distributed by the PMO to make strategic business decisions. While the work done by today’s modern PMOs guides the entire organization, the main focus is increasingly on providing executive leadership with meaningful insight around the work being delivered through both predictive (traditional and waterfall) and adaptive (agile and hybrid) innovation methodologies. Getting timely data to decision-makers enables business agility and helps ensure that the work being delivered will bring the most value to customers and the company. Following are five tips to providing insightful business intelligence: Track and extract the right information Real business insight is provided through the collection of key metrics. The right metrics allow business leaders to evaluate initiatives, identify issues, implement necessary changes and then reevaluate. For this, PMOs need a tool that employs a comprehensive approach to data. Unlike most data warehouses that are built with a primary focus on incorporating new data, CA PPM was designed for the easy extraction of information users need for effective decision-making. Capturing the right data is imperative, but it must be quickly and easily accessed to bring real value. 
Be discerning when it comes to what information is shared While the ability to capture and extract data</description>
      </item>
      <item>
         <title>The Modern PMO Focuses on Meaningful Results</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-pmo-focuses-on-meaningful-results-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/the-modern-pmo-focuses-on-meaningful-results-clarity-ppm</guid>
         <pubDate>September 4, 2017</pubDate>
         <description>Last month, I wrote that the role of the PMO had changed. Over the last decade it has evolved from providing strategic investment guidance, managing budgets and monitoring high-level execution, to a much more tactical role focused on waterfall execution and Gantt charts. The trend, as I explained, didn't pan out for most organizations and today, the PMO is transitioning back into a strategic role focused on portfolios over individual projects, and identifying the right initiatives at the right time, executed by the right teams. But the PMO isn't moving away from tactical execution entirely. It's simply expanding its scope and shifting its main focus to business results. This, of course, makes the ability to view the business at ground level as well as from 35,000 feet essential. Only from this dual vantage point can the PMO help implement investment controls that tie project execution and delivery to budgetary constraints, governance and an outcome that brings value to the portfolio and the organization. For this degree of visibility, PMOs must have the right tools. That's why the integration between CA Agile Central and CA Project &amp; Portfolio Management (CA PPM) is proving invaluable to customers. CA Agile Central allows PMOs to monitor—and to a degree orchestrate—work happening at the project level. CA Agile Central also shares real-time information with CA PPM where it's combined with pertinent financial information to provide the intelligence necessary to make strategic, data-driven decisions. The strategic PMO starts with results The right tools are essential in supporting the strategic responsibilities of the PMO. But those tools are a lot more effective for the PMO that already has the right mind-set and the right approach: the PMO that starts with the desired results and works backwards, mapping out how the company will achieve them.</description>
      </item>
      <item>
         <title>Clarity PPM Modern Business Management: An Economic Necessity</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-project-portfolio-management-modern-business-management-an-economic-necessity-modern-business-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-project-portfolio-management-modern-business-management-an-economic-necessity-modern-business-management</guid>
         <pubDate>August 31, 2017</pubDate>
         <description>If you ask any executive whether their organization is modern, they'll tell you of course it is. No leaders want to admit they are leading something that is a little behind the times, outdated or perhaps even obsolete. However, if you ask those same leaders what it is that makes their organizations modern they'll struggle to provide tangible examples of their modernity. At CA we believe that's a problem, and we also believe modern business management can be defined. Business today is constantly evolving and adapting, technology and customer demands are driving the need for ever faster responses to challenges, and organizations that cannot pivot immediately to leverage new opportunities will find those opportunities lost. That kind of business environment cannot be managed in the same way as the relatively stable business models of the past that were subjected to fairly significant shifts on only an occasional basis. Instead, organizations must manage for constant evolution — with much smaller changes on a much more frequent cadence — with the assumption that organizational stability is an outdated notion. Modern business management must therefore combine the concepts of business agility with rigorous portfolio management to ensure the value being delivered is always optimized. In practical terms this means a much closer integration between leadership and the frontline of project delivery. The amount of time it takes for decisions at the executive level to reach the project delivery level and be translated into action must be minimized, as must the time needed to get performance data translated into decision support information for leaders to leverage. This is where portfolio management truly delivers value, applying agile principles to improve organizational performance. Here are a couple of examples of that application in action: From a leadership-down perspective, portfolio management must be closely integrated with executive</description>
      </item>
      <item>
         <title>Financial Institution Customizes their Infrastructure Monitoring Solution</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/financial-institution-customizes-their-infrastructure-monitoring-solution</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/financial-institution-customizes-their-infrastructure-monitoring-solution</guid>
         <pubDate>March 19, 2018</pubDate>
         <description>I recently visited a financial institution and was surprised to see how they add their devices to CA Unified Infrastructure Management (CA UIM) groups, which determine what monitoring profiles are applied. Yes, yes, I should get out more… But when a customer using an early version of Monitoring Configuration Service (MCS) with CA UIM writes their own automatic application infrastructure discovery, it’s hard to ignore. Migrating into CA UIM This customer had, in the past, migrated from an existing CA monitoring product to CA UIM. All the monitoring was configured on each device by the old product, so they decided to use the old configuration files as a template for what should be monitored by CA UIM. This led them to create scripts to convert the old configuration to the new, as well as to use CA UIM SDKs to add devices to specific CA UIM groups where the MCS monitoring was configured. Managing 1,000+ MCS Once all the existing devices were ported over to CA UIM, MCS had thousands of groups, each with a specific set of monitoring profiles. The next issue they faced was provisioning new devices and determining which groups each new device should be added to; before, they had a template based on the old configuration, but now they had nothing. Each new device is provisioned with a robot installed, as well as a request.cfg file listing the CA UIM packages to install from the archive when the robot starts for the first time. In this case they only need one custom package, which must be in the archive. When the robot starts, the custom package is requested, installed and executed. Interestingly, this custom package does nothing more than run a post-install command, which is their policy group management script that reads their policies.csv</description>
      </item>
      <item>
         <title>Strategic Roadmaps Let You Make Smart Decisions</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/strategic-roadmaps-support-value-oriented-decision-making-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/strategic-roadmaps-support-value-oriented-decision-making-clarity-ppm</guid>
         <pubDate>August 2, 2018</pubDate>
         <description>In this blog, we will look at three concepts of strategic business roadmapping: A strategic roadmap serves as a basis for crafting a shared vision of the future. Focusing the portfolio on value allows for comparison between possible investments on different roadmaps. Roadmaps aid in sequencing all work across the portfolio because dependencies become clearer and can be better aligned. Used correctly, strategic business roadmaps can unlock entirely new project portfolio insights in a more agile fashion and in less time than most organizations currently spend on their annual planning process. Roadmaps More Easily Generate a Shared Vision of How a Strategy Gets Executed To many organizations, the classical hub and spoke model of communication is the easiest way to start the process of building a culture of bi-directional communication. If you look at Figure 1, the obvious problem with the hub and spoke model should leap right off the page. Even assuming perfect communication between corporate and the divisions, the model does not promote communication between the divisions. The Harvard Business Review article “Why Strategy Execution Unravels—and What to Do About It” pointed out that only 9 percent of managers say they can consistently rely on colleagues in other functions to help them get their strategic work executed. To solve this problem, we need to move toward the star pattern shown in Figure 1. Figure 1: Corporate communication models Issuing a memo to all the divisions that reads “As of now, you are all responsible for talking to each other” would be a great simple solution if it worked. It doesn’t. What will do the trick is creating a clear picture of how mutual support benefits everyone. Business roadmapping can be used to highlight two very important issues. The first is the competition between day-to-day operations and strategy, and the</description>
      </item>
      <item>
         <title>It’s Back to Basics for the Modern PMO</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/it-s-back-to-basics-for-the-modern-pmo-clarity-ppm-project-portfolio-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/it-s-back-to-basics-for-the-modern-pmo-clarity-ppm-project-portfolio-management</guid>
         <pubDate>August 29, 2017</pubDate>
         <description>At one time, the core responsibilities of the PMO revolved around providing investment control and guidance, ensuring projects operated within budgets and monitoring high-level execution all within the larger context of portfolio management. But over the last decade or so, that role shifted, becoming largely associated with things like waterfall execution and the production of Gantt charts. Today, we've come full circle. It's become apparent that the trend was not in the best interests of most organizations. Waterfall execution and the management thereof is certainly key to project success. But supplanting the strategic components of the department in favor of tactical execution has proven a net negative for many organizations, even when the project was carried out flawlessly. It's pretty easy to understand why: Even if a project is perfectly executed, it won't benefit the organization as anticipated if it isn't the right project at the right time and at the right cost in the first place. And that is where the strategic PMO is restoring its focus—identifying the right initiatives at the right time, executed by the right teams, all in relation to the other projects happening concurrently. PMOs return to strategic guidance, investment controls Today's PMOs are going back to creating viable, dynamic portfolios and steering the work within the organization towards those projects with the ability to deliver the most value. And by illustrating how specific projects can generate revenue and how rationalized execution can minimize resource requirements and save money, the PMO is as relevant as ever. To succeed, the PMO must have visibility into all the work happening across the organization—to cut through the noise and help the organization implement investment controls and identify the best opportunities. When properly applied, business investment controls tie project execution and delivery information with the things the business cares</description>
      </item>
      <item>
         <title>PPM 101: Project management made easy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-project-management-made-easy-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/ppm-101-project-management-made-easy-clarity-ppm</guid>
         <pubDate>February 5, 2019</pubDate>
         <description>Multiple project tools, manual tasks and a never-ending demand for status reports are but a few of the many pains and aches of a project manager. Talking to Clarity PPM product manager, Brian Nathanson (PMP), we'll try to remedy some of these issues. Brian learned project management principles during several years at KPMG, then actively applied those concepts for several more years as a PMI-certified Project Manager in software development at a boutique consulting firm in Reston, VA. He has a Master's in Technology Management through a program co-sponsored by the Wharton School, where he focused on advanced portfolio modeling and simulation techniques with special consideration for how such techniques can assist in the management of high-risk technology projects. Brian has worked with dozens of customers to apply financial portfolio principles and technology to the management of business portfolios. He has also conducted training in project management fundamentals at various conferences and spoken on a variety of topics at PMI chapter meetings. We started our discussion by asking Brian to complete the sentence: Clarity PPM improves the capability of a project manager by... &quot;Providing a common set of tools and as a result, a common set of practices that project managers can use so that regardless of where they are they know they can do the same thing,&quot; said Brian. &quot;It also allows them to have a common language in the organization whether they are project managers who come with experience, new project managers, or maybe a subject matter expert who got thrown into being a PM.&quot; &quot;This consistent approach and language is critical; it creates a common framework that ensures PMs can focus on the challenges of their projects, not how they do the basics of the work.&quot; We explored this further with Brian by asking him: What</description>
      </item>
      <item>
         <title>Clarity PPM Modern Business Management: Being Real</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/modern-business-management-managing-reality-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/modern-business-management-managing-reality-clarity-ppm</guid>
         <pubDate>October 18, 2017</pubDate>
         <description>When we talk about modern business management, we often mention the importance of connecting strategy and execution and how to optimize effective communication between the top-level decision-makers and the front lines of project teams where the work is happening. However, that communication channel must be two-way—effective management also requires timely information to flow from project teams to leadership to facilitate decision-making with the best possible information. With numerous projects underway at any given time, and with each of those initiatives operating in a dynamic environment where performance is constantly shifting, the focus must be on ensuring only information relevant to decision-making is getting to leadership, and that the information has an appropriate context. This is the heart of “bottom-up” modern business management and has several elements: Standards for communication of information: Regardless of whether projects are being executed using agile, hybrid or waterfall techniques, the information provided must be consistent to allow for effective decision-making and comparison across multiple initiatives. Filters for what is communicated: Not every decision on every project requires leadership involvement, and anything that doesn’t contribute to effective decision-making creates noise and reduces the ability to make the right decisions. Effective communication channels: Information from project teams must get to the right decision-makers with the right background and analysis in a timely manner so that decisions have the best possible chance of not just being the right ones, but also made in time to be effective. Each of these elements requires an effective project and portfolio management function operating to maintain and enhance the modern business management environment. Let’s look at each of them in more detail to see how that works. 
One of the most critical contributors to modern business management success is the ability to create standards for project assessment and decision-making that can be applied</description>
      </item>
      <item>
         <title>Clarity PPM Modern Business Management: The Skills You Need</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-modern-business-management-the-skills-you-need</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/clarity-ppm-modern-business-management-the-skills-you-need</guid>
         <pubDate>January 7, 2018</pubDate>
         <description>Modern business management is an evolution of how organizations deliver on strategy. It follows that the skills organizations need to deliver it must also evolve. Let’s consider the skills an organization needs to deliver an effective approach to modern business management—from the top, the bottom and as an overall environment. There are three key roles that must change for MBM to be successful. Let’s start with the most critical of all: leaders. The good news for organizational leadership is that the nature of the change they need to make is evolutionary rather than revolutionary. Leaders are already more focused on understanding how their organization’s operating environment is changing and what the implications of those changes are. The skills development they must focus on is the ability to concisely and effectively communicate that understanding to the project teams executing the initiatives that are impacted. Leaders tend to focus on empowering teams to make change-related decisions rather than trying to control those decisions themselves, but this isn’t really a skill. Instead it is a degree of self-discipline that allows others to drive the change, and while difficult, it is not a skill that must be learned. On the other hand, if leaders are not effective communicators, teams will be unable to make the right changes, because they won’t have the necessary information and context. Project managers will also find their skills evolving rather than revolutionizing. They are the recipients of the information and context from leaders, and they must focus on creating the empowered environment for their teams to drive decision making. This is an extension of the leadership skills they already have and should be fairly straightforward for most project managers. However, it will require patience and understanding, as team members will be experiencing more dramatic changes as they are asked</description>
      </item>
      <item>
         <title>Software Capitalization for PMOs: A Not-So-Quick Primer</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/software-capitalization-for-pmos-a-not-so-quick-primer-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/software-capitalization-for-pmos-a-not-so-quick-primer-clarity-ppm</guid>
         <pubDate>September 4, 2017</pubDate>
         <description>As agile becomes a way of life for IT organizations, more PMOs and IT senior managers have been revisiting their understanding of how to correctly implement the Accounting Standards Executive Committee’s Statement of Position (SOP) 98-1. Essentially, this SOP allows companies to capitalize internally developed software in the same manner as purchased application software. On the surface, the ins and outs of how to handle this SOP should be easy to understand, especially since, in theory, it mirrors an earlier ruling (FASB-86) on capitalizing software developed for sale. According to my reading of SOP 98-1, all the preparation and time spent on the front end of a software development effort should be treated as a period expense, and everything that happens after the development effort reaches production (bug fixes, etc.) is excluded from capitalization. When SOP 98-1 was first implemented, many organizations used their waterfall lifecycle to define when capitalization should begin and end. Today, with more organizations adopting business agility, insisting that waterfall methods are the only ones supported by SOP 98-1 is simply incorrect. But many organizations are struggling with where to draw the lines. A clear understanding of the original principles that underlie agile software development will quickly prove that applying the few stated guidelines of SOP 98-1 to agile should be simple. The envisioning phase is an expense, work done during the build/create phase should be capitalized, and the post-implementation phase should be expensed. I’ve glossed over a few corner cases and some details, but in principle, it really is that simple. So why is SOP 98-1 coming back on everyone’s radar after all these years? Like everything else in life, the answer is complex and includes politics, poor agile practices, horrendously bad resource management practices, EBITDA and, finally, what I refer to as “the smartest</description>
      </item>
      <item>
         <title>Make Continuous Delivery Easy with Model-Driven Releases</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/make-continuous-delivery-easy-with-model-driven-releases</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/make-continuous-delivery-easy-with-model-driven-releases</guid>
         <pubDate>September 27, 2018</pubDate>
         <description>New technology is always exciting, and something many of us can’t wait to experiment with. But what happens to our older systems—the ones that have been keeping the lights on all this time—when we rush off with the latest thing? They still power the business, but are often forgotten. This can be particularly risky when we’re instructed from the top level to immediately implement the latest and greatest tech. So how do you align the two? If there is a disconnect between an overarching strategy and the workers on the ground, teams drift apart and the scale and complexity of any orchestration or management becomes increasingly challenging. This is magnified the more rapidly technology advances, and is becoming increasingly prevalent in areas such as cloud migration. Currently, deployments are often too closely tied to on-premises models and tooling, yet simultaneously the company in question will have a cloud strategy in place and will have moved data centers there. This raises the question: how do you adapt for what you have today and bridge the gap for tomorrow? Indeed, a second question replaces the word ‘adapt’ with ‘protect’. In the modern era, security and governance are increasingly important, but typically only added into development cycles late in the day. Inevitably, this results in delays, rework and a lot of risk exposure. These same questions and challenges are popping up throughout the entire enterprise. The picture is further complicated by the array of apps and services that all work together and are spread across the entire business. Different teams will all be using different technology stacks and tools across each and every environment. Thus finding a consistent, reliable approach that can account for—and indeed harness—this diversity is crucial for the modern enterprise. Model-Driven Releases CA Continuous Delivery solves these problems by using</description>
      </item>
      <item>
         <title>5 Tips to Speed up Environment and Server Provisioning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/5-tips-to-speed-up-environment-and-server-provisioning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/5-tips-to-speed-up-environment-and-server-provisioning</guid>
         <pubDate>April 23, 2018</pubDate>
         <description>Frustrated waiting around for environment and server provisioning? Here are a few tips to make your life easier! Is server provisioning a smooth process in your organization, or does it get held up at every step? In large enterprises, environment and server provisioning account for a significant portion of the operations team's time, and moving from a change request to the creation of a server or environment can take days, sometimes even weeks, leaving users frustrated while they wait. At the same time, the digital transformation organizations are undergoing today means there is a growing need for rapid environment and server provisioning. Organizations are adopting agile methodologies and software teams are increasing the speed of their development processes, thus requiring more and more servers and environments to be provisioned for their development and testing. So how can you speed up your environment and server provisioning? Here are five tips to consider: Remove Manual Steps Provisioning a server may be as simple as starting a new virtual machine (VM), but often there are details that complicate and compound the task. These include:
Registering the newly provisioned server in the IT Service Management (ITSM) tool your organization uses
Getting through approval processes before you can actually provision the server
Notifying the user of their new server address and credentials
Setting reminders to check when it's time to shut down the server so it doesn't sit idle
This doesn't even take into account the possible variations in server provisioning such as patch levels, base server configurations and applications that may be requested with the new server. Clearly, this process merits automation; manually provisioning servers or environments isn't scalable and, quite frankly, is no longer a feasible option. Avoid Islands of Automation As you automate the various parts of environment or server provisioning, there is a</description>
      </item>
      <item>
         <title>Podcast: Gaining Application to Infrastructure Visibility with Assisted Triage - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-gaining-application-to-infrastructure-visibility-with-assisted-triage-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-gaining-application-to-infrastructure-visibility-with-assisted-triage-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>May 3, 2018</pubDate>
         <description>In our last podcast, we discussed transactional maps and the power behind CA APM Assisted Triage. In this podcast, CA's Amy Feldman and Andreas Reiss pick back up on this topic to discuss how Assisted Triage can be used to detect problems in container environments by providing visibility from application to infrastructure.

By providing detailed examples, this podcast will help you better understand how Assisted Triage works to help you quickly determine the root cause of issues.

To test these features yourself, get started with a free 30-day trial of CA Application Performance Management.</description>
      </item>
      <item>
         <title>Unlock Your Full Network Monitoring Flow Potential</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/unlock-your-full-network-monitoring-flow-potention</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/unlock-your-full-network-monitoring-flow-potention</guid>
         <pubDate>May 27, 2019</pubDate>
         <description>Traditionally, network monitoring software was designed to act in isolation, limited to the features and capabilities defined by its code base. But with the advent of modern networks, corporations use a variety of tools and applications in their ecosystem to arrive at a solution that satisfies their requirements. Data silos are no longer acceptable. When a network monitoring tool and other applications can talk to each other or share data, they open up new ways to satisfy critical use cases which would otherwise prove very difficult, if not impossible, to solve with a single tool. One popular method used by applications lately is to expose APIs which allow interactions with other network monitoring tools in the ecosystem (data interchange) and the ability to use scripts or automation to perform actions (application customizations), versus the traditional approach of click-through graphical interfaces. Our latest AIOps for NetOps network monitoring release exposes several key APIs for Network Flow use cases, which now allow flow interaction with our unified NetOps Portal and with external applications such as Splunk or Grafana for analysis and automation, as well as performing administrative tasks. The Network Flow Analysis (NFA) REST API is provided by the OData service that runs on the NFA console. The OData service connects to the NFA console and Harvester databases and enables data retrieval using the simplified OData Data Model. Typical administrative use cases range from enabling interfaces to deleting old interfaces. Below are two scenarios: 1. A System Administrator has configured a network router to send NetFlow data to NFA and wants to enable interface(s) for Flow Analysis.
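As a rough sketch of how this first scenario could be scripted rather than clicked through, the enable actions can be called over HTTP. This is a minimal, hypothetical example: the server name, port, and interface IDs are placeholders to substitute with your own values, and only the endpoint paths come from this post.

```python
import json
import urllib.request

# Hypothetical placeholder values -- substitute your own NFA OData server,
# port, and interface IDs. The endpoint paths are the ones quoted in this post.
ODATA_SERVER = "nfa-console.example.com"
PORT = 8080
BASE = f"http://{ODATA_SERVER}:{PORT}/odata/api"

def enable_interface_url(interface_id):
    # URL for the single-interface enable action (scenario 1).
    return f"{BASE}/availableInterfaces({interface_id})/com.ca.nfa.odata.enableInterface"

def enable_interfaces_bulk(interface_ids):
    # Bulk enable action: POST a JSON body listing the interface IDs.
    url = f"{BASE}/availableInterfaces/com.ca.nfa.odata.enableInterfaces"
    body = json.dumps({"InterfaceIds": list(interface_ids)}).encode("utf-8")
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # raises on HTTP errors
```

The same two endpoints could equally be driven from Splunk or Grafana, or from any automation tool that can issue HTTP POSTs.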
Enable a single interface:
http://{{ODATA_SERVER}}:{{PORT}}/odata/api/availableInterfaces({{Interface_ID}})/com.ca.nfa.odata.enableInterface
Enable multiple interfaces (bulk API), with a JSON body:
http://{{ODATA_SERVER}}:{{PORT}}/odata/api/availableInterfaces/com.ca.nfa.odata.enableInterfaces
{ "InterfaceIds": [] }
2. A System Administrator wants to know which interfaces have not received data after ‘X’ time and wants to delete all those</description>
      </item>
      <item>
         <title>CA Application Performance Management for CA SSO 13.1</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-application-performance-management-for-ca-sso-13-1</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-application-performance-management-for-ca-sso-13-1</guid>
         <pubDate>October 18, 2017</pubDate>
         <description>CA Application Performance Management for CA Single Sign-On (CA APM for CA SSO) provides advanced performance management tools for the CA SSO production environment. This solution helps you monitor critical components and isolate application bottlenecks, improving the availability of the CA APM for CA SSO solution and the overall customer experience. This integration allows you to view performance metric data from CA SSO in CA APM and helps you monitor the performance impact of CA SSO on distributed web applications and web services. With this latest release, CA APM for CA SSO 13.1, we have simplified the installation and configuration process, allowing you to set up CA APM for CA SSO in just a few clicks! How to install and configure CA Application Performance Management 10.5.2 with CA SSO 13.1: Before you can install agents, you must first configure the downloaded Enterprise Manager installation images for communication with the Application Performance Management server. You can pre-configure the installation image in two ways: the EP Agent is installed and configured with the Enterprise Manager, or the Policy Server or Web Agent is installed. After downloading the required zip/tar file to the target server, extract the contents (for this example I extracted the Linux .tar file at /ca/APM/13.1). Start the EP agent, stop the Policy Server or Web Agent, and run the installer as .bin or .exe as appropriate. When this is done, you can start reconfiguration as shown below (full details on agent configuration can be found on DocOps: CA APM for CA SSO 13.1). The complete installation steps are:
1. Source the WA environment variable.
2. Check that the web server is down.
3. Traverse to the installer folder.
4. Launch the installer.
5. Select the home folder for APM SSO 13.1.
6. Enter the EPAgent config details.
7. Enter the complete path of the WA config file.
8. Review the pre-install summary.
9. Post-install, confirm the success message.
10. Traverse to the log folder</description>
      </item>
      <item>
         <title>Application Infrastructure Discovery for CA UIM - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/application-infrastructure-discovery-for-ca-uim-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/application-infrastructure-discovery-for-ca-uim-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>April 18, 2018</pubDate>
         <description>This article is the second part of Stephen's Custom Application Infrastructure Discovery for Monitoring blog. CA's approach to automatic application infrastructure discovery is wide open: you can add your own scripts to discover not just applications but almost anything, and use the results to group devices together so that common monitoring profiles can be applied. CA Unified Infrastructure Management (CA UIM) application discovery requires a robot (7.90 or greater) so the various scripts can be deployed and executed by the spooler (internally known as the &quot;spooler extensions&quot;). Each script should run a single command or series of commands to gather the information required. The script outputs to stdout a set of key-value pairs, completing with Finished=true. The spooler reads in the key-value pairs and creates an entry in the niscache directory, which is picked up by the discovery server, and a new device attribute is created for this device. These device attributes can now be used as filter criteria on CA UIM groups, using the SQL option. So the application discovery is made up of existing functionality plus the new spooler extensions. Let's look at one of the out-of-the-box scripts to show how easy it would be to write your own. Each script is contained in its own UIM package and can be found in the archive; here is the app_disco_iis_server package: Notice that the paths of both files point to the plugins/attr_publisher directory; the plugins directory is found in the base Nimsoft directory. The attr_publisher.cfx defines the script for the spooler to execute:
custom_scripts_dir=plugins/attr_publisher/custom_scripts
filename=app_disco_iis_server
The app_disco_iis_server.bat file is the script itself, which is in the plugins/attr_publisher/custom_scripts directory.
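The shipped script is a Windows batch file, but the contract it fulfills is simple: print key-value pairs to stdout and finish with Finished=true. As a rough, hypothetical sketch of an equivalent custom discovery script (the attribute name iis_server is illustrative, not taken from the package):

```python
import subprocess

# Hypothetical sketch of a custom discovery script: check whether the IIS
# worker process (w3wp.exe) is running and emit key-value pairs on stdout,
# ending with Finished=true so the spooler knows the output is complete.
PROCESS_IMAGE_NAME = "w3wp.exe"

def discover():
    try:
        out = subprocess.run(
            ["TASKLIST", "/FI", f"IMAGENAME eq {PROCESS_IMAGE_NAME}"],
            capture_output=True, text=True, check=False,
        ).stdout
    except FileNotFoundError:  # TASKLIST is Windows-only; treat as not found
        out = ""
    lines = []
    if PROCESS_IMAGE_NAME in out:
        lines.append("iis_server=true")  # becomes a device attribute
    lines.append("Finished=true")        # required terminator for the spooler
    return lines

if __name__ == "__main__":
    print("\n".join(discover()))
```

Any language works here, as long as the script writes its key-value output to stdout for the spooler to collect.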
If we look at the contents of the script, it is a set of commands that output to stdout:
@echo off
set process_image_name=w3wp.exe
TASKLIST /FI &quot;IMAGENAME eq %process_image_name%&quot; 2&gt;NUL |</description>
      </item>
      <item>
         <title>Big Data Dashboard Design for Network Monitoring Tools</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/big-data-dashboard-design-for-network-monitoring-tools</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/big-data-dashboard-design-for-network-monitoring-tools</guid>
         <pubDate>April 9, 2018</pubDate>
         <description>Get the most out of big data through effective dashboard design and workflows for faster network operations triage. Gathering data is the first stage of effective and intuitive monitoring-tool dashboard design, as we covered in our FITPAL for Healthy Networks blog. Deciding on key performance indicators (KPIs) is the second step. Once you've gathered the data, finding the metrics that matter is what can make or break your ability to be proactive and/or quickly triage network issues. None of that matters, though, if you can't present the data and provide an easy, intuitive workflow. The old adage that &quot;a picture is worth a thousand words&quot; can be expanded as we look at large-scale data collection and storage with big data and data lakes. If the resulting picture (dashboard) has bad KPIs (metrics), then it's worthless: it will be ignored or, worse, can lead to misdirection (red herrings), including false alarms, wasted time troubleshooting, wasted expense from ordering more capacity/bandwidth, or unnecessary expansion of services to rebalance applications or services. How can we fix this problem? In the words of Edward Tufte in his book The Visual Display of Quantitative Information: &quot;Graphical excellence is that which gives the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space. Additionally: Graphical excellence is nearly always multivariate.&quot; If we apply this to effective and efficient dashboard design for network monitoring tools, then we need to ask ourselves a few questions:
Who are the metrics for, who needs the metrics, and why are the metrics important?
Can the metrics be used by more than one group or persona?
Can they be used as predictive measurements, or can anomalies be spotted easily?
How can we get the most out of the dashboards and collected metrics?
When</description>
      </item>
      <item>
         <title>Doubling Down on Customer Success</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/doubling-down-on-customer-success</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/doubling-down-on-customer-success</guid>
         <pubDate>December 18, 2018</pubDate>
         <description>As CA Technologies enters a new era with Broadcom, we are excited about the opportunity to, as Broadcom CEO Hock Tan put it, &quot;double down for future growth.&quot; Arguably the most important investment we are making is in customer success. Greg Lotko's recent blog mentioned changes we're making to strengthen our relationship with you and invest in your success. What does that mean? There are as many definitions of customer success as there are customers. Success may be filling a critical gap in mainframe skills or expertise; converting smoothly from a competitive software product to CA's; improving the economics of the mainframe; better integrating mainframe, cloud, and hybrid IT; or enabling modern software development on mainframe. Whatever your definition of success is, my organization's mission is to help you achieve it. To this end, we have aligned all the functions that directly serve customers into one organization which we are also growing significantly. Working together, our professional services, technical support, presales, engineering and conversion services, education and training, partner support and strategic alliance experts can serve you with greater speed and agility. We are making our services and field and lab resources more accessible to you. Our goal is to partner with you through your entire mainframe journey, helping you realize greater value from your investment and achieve success now and in the future. For example, we offer conversion services that accelerate software migration to help you wisely invest your mainframe budget. We perform smart health checks where we assess your current software configurations, utilization, and processes and develop a plan to help you get the most out of your mainframe. Our implementation services can help you upgrade to a new release or adopt a new solution while minimizing operational risk and maximizing productivity and ROI. All of these offerings</description>
      </item>
      <item>
         <title>Home-Grown Continuous Delivery</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/home-grown-continuous-delivery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/home-grown-continuous-delivery</guid>
         <pubDate>August 19, 2018</pubDate>
         <description>How CA Continuous Delivery Automation integrates with applications developed in-house at Austria's largest pension provider. Although dozens of quality tech tools exist to solve a variety of business problems, the market doesn't always provide the exact tool your organization needs, and if nothing on the market aligns with your needs, you might have to develop your own. But even with custom applications developed in-house, challenges remain: how can you speed up deployments, release updates continuously and simultaneously integrate your home-grown solution with all of the other tools in your technology stack? If this sounds like a familiar challenge, CA Continuous Delivery Automation might be able to help. Tailored Automation for Custom Applications Austria's largest pension provider, Pensionsversicherungsanstalt (PVA), administers pensions for millions of Austrian citizens and employs over 5,000 people. As part of the process of replacing mainframe-based legacy applications, PVA developed a new pensions service, called 'zepta', for customer service staff. Zepta is an application suite running on an IBM WebSphere platform that was designed using service-oriented architecture (SOA) principles. Its flexibility allows PVA to respond quickly and agilely to changing markets, with the caveat that keeping zepta up to date requires frequent enhancements and maintenance refreshes, which relied on labor-intensive manual deployment techniques. For a customized application like zepta, finding the right continuous delivery solution requires a more bespoke approach. CA Continuous Delivery Automation acts as a central point of control for your tech stack and, as a result, is capable of monitoring and integrating with not only every best-of-breed and open-source tool out there today, but also with those developed in-house.
For this reason, the seamless integration provided by CA Continuous Delivery Automation helped make it the right choice for automating application releases for zepta at PVA. Automation Makes Your Toolchain Agile Walter Schimpelsberger, responsible for the operation of CA automation solutions at</description>
      </item>
      <item>
         <title>Standing Up For Continuous Delivery</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/standing-up-for-continuous-delivery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/standing-up-for-continuous-delivery</guid>
         <pubDate>June 5, 2019</pubDate>
         <description>In a good blog post published last week by my old-time friend Steve Burton, he calls for all &quot;the real Continuous Delivery (CD) vendors to please stand up&quot;. Yes! We're standing up, and putting all the fingers on both hands up too. Steve does a great job educating the crowd on the differences between Continuous Integration (CI) and CD, emphasizing how they are not the same, and I agree. He then goes on to talk about Application Release Orchestration (ARO) and how it is also not the same thing as CD, and again, I completely agree. However, in that passage, Steve makes the mistake of assuming that because many incumbent vendors have been in the market for years, it inevitably means that we don't understand these differences, nor practice CI/CD internally or, indeed, provide real CD solutions. Wrong! We absolutely do provide a SaaS-based CD solution, which is available at https://cddirector.io. Continuous Delivery Director (CDD) was absolutely founded, designed, and architected around the birth of Continuous Delivery, Cloud, and DevOps. And by the way, we too use Jenkins internally for CI and know the difference. Right, so now that we've cleared up the misunderstanding of where the market is, I think this is a great opportunity to talk about some of the fundamentals of Continuous Delivery in the real world and help each other and our community make progress toward that ultimate goal of true digital transformation, agility, and quality for all. It's Not All Kubernetes and Serverless Unless we are all willing to stop using our bank accounts, buying flight tickets, booking hotel rooms and so forth, we need to reckon with the reality that more than 90% of the critical business applications in the world, managed by enterprises, still run on-prem, and some utilize older technology stacks</description>
      </item>
      <item>
         <title>From Zowe to Brightside and Beyond</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/from-zowe-to-brightside-and-beyond</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/from-zowe-to-brightside-and-beyond</guid>
         <pubDate>February 10, 2019</pubDate>
         <description>We have reached a major milestone on our mainframe innovation roadmap, and I wanted to use the opportunity to introduce the updates, explain how we got to this point and share where the initiative will go from here. The first major update is that Broadcom now provides a commercial, fully supported version of the second major part of the Zowe Open Mainframe Project: the API Mediation Layer. This means that our customers can start exploring the technology with the open source version of the Zowe API Mediation Layer, then, at their convenience and as business needs dictate, transition to a commercial version of the offering with stable releases, 24x7 phone support, and simplified installation and configuration, and accomplish this transition with as little interruption as possible. At the time of release, the API Mediation Layer supports REST APIs from CA Endevor SCM and CA Endevor FileMaster Plus. In addition, CA Workload Automation ESP APIs are discoverable in the Zowe API catalog, enabling users to monitor and control ESP-driven workloads. Similarly, APIs from CA Sysview for DB2, formerly known as Insight, are also discoverable in the Zowe API catalog, enabling users to retrieve critical DB2 system performance metrics. You can learn more about Zowe, its components and the collaboration between IBM, Rocket Software and Broadcom here: zowe.org The second reason for my excitement is the advancement of the already available Command Line Interface. With this release, we have extended support for use cases that involve Ops automation workflows through a new set of commands for CA OPS/MVS, and we also addressed one of the major requests from our customers, which was support for the CICS environment. How did we get here? Just a year ago, it was bold to predict that very soon a large community of developers would start interacting with z/OS in the same way</description>
      </item>
      <item>
         <title>Beginning Blockchain: Key Questions to Getting Started</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/beginning-blockchain-key-questions-to-getting-started</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/beginning-blockchain-key-questions-to-getting-started</guid>
         <pubDate>November 7, 2017</pubDate>
         <description>Since its introduction as the technology powering Bitcoin, Blockchain continues to inspire game-changing ideas across all industries. Yet, as companies begin their journey with Blockchain, they are realizing the barriers that must first be overcome. A Playbook for Blockchain in the Enterprise, Part 1 This is the first blog in a playbook that will cover several aspects of using Blockchain for business, from getting started and brainstorming ideas, to integrating and securing Blockchain in a production environment. Planning for Innovation Businesses are seeing the potential of Blockchain and are experimenting with proofs of concept to incubate their ideas. Yet only a small percentage of these are expected to graduate to production. It can be easy to get caught up in the hype surrounding Blockchain without first mapping a path to success. How can organizations choose the right use case from the start to maximize their investment and participate in the Blockchain Revolution? It first requires understanding the benefits of Blockchain, and then playing to its strengths. The Value of Trust We'll assume you have at least a vague idea of how the technology behind Blockchain works. However, you don't have to be an expert on the technical aspects to begin considering the possibilities. We at CA specialize in software and tools to establish Digital Trust. You could also say this is the primary objective of Blockchain. Today, processes operate according to a lack of trust. We rely on intermediaries such as banks to ensure that we can trust other parties. We depend on disparate, inefficient systems of record that are vulnerable to compromise. We still require paper trails and human intervention to facilitate the exchange of high-value items. Blockchain helps us overcome these challenges by increasing trust and providing new ways of conducting business. Putting It into Practice Now that</description>
      </item>
      <item>
         <title>It's Time to Take APM Up To 11 - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/it-s-time-to-take-apm-up-to-11-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/it-s-time-to-take-apm-up-to-11-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>June 20, 2019</pubDate>
         <description>Introducing DX APM 11 Why APM needs to be taken to the extreme Those of us who have been in the APM industry for a while have seen APM solutions evolve to meet the growing demands of IT Organizations and Application Developers. APM solutions started as simply understanding the performance and health of applications through basic metric collection and quickly evolved to collecting transaction performance through byte-code instrumentation. As applications became critical elements to the business, we saw APM solutions evolve to understand the user experience and digital performance across web and mobile devices. We are again at a critical inflection point in the evolution of APM solutions where digital initiatives, modern architectures, vast amounts of monitoring data, new development processes, and business pressures are pushing APM solutions to the extreme. That’s right: APM solutions need to evolve into intelligent platforms with analytics, machine learning, and automation, or face demise as one-off tools. We saw this trend coming and have been aggressively investing in the innovations needed to evolve our APM solution. Taking DX APM up to 11 As a result of anticipating new market pressures and customers’ need to provide additional value as part of their digital transformation initiatives, we are proud to be introducing DX APM 11. DX APM 11 is taking our current APM innovations to the next level with new AIOps capabilities, a radical new architecture, and continued capabilities to support today’s modern application architectures. Deliver Operational Efficiencies with Actionable Intelligence using AI/ML Our AIOps solution, a key part of DX APM, helps teams simplify and speed triage through automatic anomaly detection, alarm clustering and suppression, and complete diagnostic insights from app to infrastructure. 
The solution utilizes our AIOps platform for analytics and machine learning techniques across various data types providing faster</description>
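DX APM's actual algorithms are not described in the post; as a rough, generic illustration of the two techniques named above, here is a simple z-score anomaly detector and a naive clustering of alarms by component (both assumptions for illustration, not the product's implementation):

```python
from statistics import mean, stdev
from collections import defaultdict

def anomalies(samples, threshold=2.0):
    # Flag indices whose value lies more than `threshold` standard
    # deviations away from the sample mean.
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if sigma and abs(x - mu) / sigma > threshold]

def cluster_alarms(alarms):
    # Group raw alarms by the component that raised them, so a single
    # incident surfaces as one cluster rather than an alarm storm.
    clusters = defaultdict(list)
    for alarm in alarms:
        clusters[alarm["component"]].append(alarm["message"])
    return dict(clusters)

latencies = [100, 102, 98, 101, 99, 103, 97, 100, 450]  # ms; the last sample is a spike
spike_indices = anomalies(latencies)
```

Production AIOps platforms use far more sophisticated models (seasonality, topology-aware correlation), but the shape of the problem, detect outliers and then collapse related alarms, is the same.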
      </item>
      <item>
         <title>Modernizing the Mainframe Developer Experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/modernizing-the-mainframe-developer-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/modernizing-the-mainframe-developer-experience</guid>
         <pubDate>June 26, 2018</pubDate>
         <description>As I mentioned in my introductory blog post, I believe that we need to change the way in which modern developers interact with and develop mainframe software to best leverage their skillset. Ideally, we should foster an environment, in which new mainframe developers can quickly become productive working on cross-platform mainframe applications by using ever-evolving tools and frameworks familiar to them. In this blog, I'll introduce a use case that exemplifies a modern development experience and later discuss how a team constructed the CI/CD pipeline that enables it. Onboarding a New Developer Team Steel Masters is an agile scrum team at CA Technologies that is developing a proof-of-concept application named Marbles, which interacts with mainframe systems. The Steel Masters are welcoming a new mainframe developer to their team - a recent computer science college graduate named Mindy. Steel Masters are thrilled to have the opportunity to on-board a new application developer with a fresh take on mainframe development. Mindy requires a few basic tools before she can become productive and contribute to the Marbles application. She needs access to the source code repositories, access to a test environment, and the ability to build, test, and deploy the application. Before we learn about the innovative methods that Mindy can use to contribute to the project, it is helpful to understand a bit more about the Marbles application and the challenges that come with supporting it. What is &quot;Marbles&quot;? Marbles is a proof-of-concept application that is intended to simulate a full-featured software application and development environment. The use case for the Marbles application is simple: it tracks a quantity of marbles of various colors. The application displays as a GUI in a web browser where the user can add or remove marbles and view a visual representation of how many marbles exist</description>
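The Marbles code itself is not shown in the post; a hypothetical sketch of the described behavior (tracking marble quantities by color, with add and remove operations and a summary view) might look like:

```python
from collections import Counter

class Marbles:
    """Hypothetical sketch of the Marbles use case: marble counts by color."""

    def __init__(self):
        self.counts = Counter()

    def add(self, color, n=1):
        self.counts[color] += n

    def remove(self, color, n=1):
        # Never let a color's count go negative.
        self.counts[color] = max(0, self.counts[color] - n)

    def summary(self):
        # The real app renders this as a visual representation in a browser.
        return dict(self.counts)

jar = Marbles()
jar.add("red", 3)
jar.add("blue")
jar.remove("red")
```

The point of the proof of concept is not this trivial logic but the pipeline around it: the same class could be built, tested, and deployed against mainframe systems with the tools Mindy already knows.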
      </item>
      <item>
         <title>Identify your sweet spot for effectively implementing blockchain</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/identify-your-sweet-spot-for-effectively-implementing-blockchain</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/identify-your-sweet-spot-for-effectively-implementing-blockchain</guid>
         <pubDate>November 30, 2017</pubDate>
         <description>Earlier in our playbook for blockchain, my colleague Brian Henkel highlighted some of the key questions to ask when getting started with your blockchain. You might recall that in order to benefit from blockchain, its implementation must be: decentralized, to eliminate the need for a central authority; immutable, to reduce the risk of fraud; immediate, delivering near-instant reconciliation; and simplified in terms of infrastructure complexity. With that in mind, let's examine the important considerations for installing a blockchain network so that you can start coding applications. A Playbook for Blockchain in the Enterprise, Part 2 Did you know your existing investment in the IBM System z platform is ideal for running blockchain due to its inherent superiority in scalability, performance and security over other distributed platforms? The mainframe is built for speed, which means it has tremendous memory and networking capabilities that can accelerate the interaction between your blockchain and other existing business data. It's also built for security, with hardware accelerators enabling pervasive encryption not found on x86 platforms that are common to most public clouds. Finally, the mainframe is built for scalability to the tune of running 8,000 virtual machines with up to 32TB of memory. Equally important is your choice of a blockchain distributed ledger system. As a member of the Linux Foundation, CA Technologies is helping advance cross-industry blockchain technologies through the open-source collaborative effort Hyperledger. Within Hyperledger, the Fabric network is a business blockchain framework used for developing blockchain applications with modular architecture, thereby allowing components such as consensus and membership services to be plug-and-play. 
The Sweet Spot Combining the Hyperledger Fabric platform with zLinux delivers a fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. If you would like to create a blockchain network</description>
      </item>
      <item>
         <title>Ensure Customer Satisfaction with End-to-End Value Delivery</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/ensure-customer-satisfaction-with-end-to-end-value-delivery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/ensure-customer-satisfaction-with-end-to-end-value-delivery</guid>
         <pubDate>September 19, 2018</pubDate>
         <description>Dynamically plan and manage your multi-app, multi-team releases Today, consumers are more ready than ever to switch allegiances if they feel their expectations are not being met, so you need to ensure that they receive end-to-end value reliably and predictably. In the face of increasing complexity and market pressure, continuous delivery has provided the answers, facilitating speed, quality and visibility within the software delivery lifecycle. While continuous delivery was implemented from the get-go by start-ups and smaller software houses, large-scale enterprises have been slower to the punch. For them, the wake-up call only arrived after seeing unicorns beat them in the marketplace, push competitors into oblivion and cause widespread disruption. They were forced to sit up and take note. Furthermore, there is overwhelming evidence to support the case that failure to embrace new technical capabilities has a measurable negative impact on business. And this is not just the short-term reduction in sales and revenue caused by release delays and missed deadlines. Long-term prospects are also undermined; excessive amounts of manual tasks, unplanned work, wait time and technical debt diminish innovation and competitive edge. Becoming Value Stream Focused The first step an organization must take on its continuous delivery journey is to establish itself as being value stream focused. After all, how can you identify the parts of a process that can be optimized if you can’t clearly see what you’re getting out of every step, and understand how each task relates to the next? Given all the complexity in the enterprise—multiple apps, dependencies and releases—understanding the business impact of any changes is crucial; visibility and advance notice of any potential clashes or conflicts are key to dynamic delivery. 
By managing the whole value stream, you can ensure innovations reach users in a rapid, predictable manner—with clear visibility of progress</description>
      </item>
      <item>
         <title>Be a grandmaster at understanding legacy code</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/be-a-grandmaster-at-understanding-legacy-code</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/be-a-grandmaster-at-understanding-legacy-code</guid>
         <pubDate>October 27, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 4 Earlier in our playbook, Modernizing Development on Mainframe, my colleague Sujay Solomon described the key criteria for supporting developers towards a successful modernization effort. Developers desire flexibility in using their preferred, best-in-class tools, and they want to do so without adhering to historical practices for legacy code. Not surprisingly, a successful modernization effort is highly dependent on developers having the ability to treat the mainframe as any other development platform without having to learn its specificities. In this part of our playbook, I'll explain how developers can apply a classic technique to more easily maintain mainframe applications. A picture is worth a thousand words At CA Technologies I am the product manager for CA Development Environment for z Systems, our company's enterprise-grade, open IDE for multi-modal development. A large part of my work focuses on helping to improve the productivity for seasoned developers and new-to-mainframe developers, ultimately showcasing that application development on z Systems is not all that different. At times that might seem challenging given that many mainframe applications have grown to resemble huge monoliths: complex systems requiring extensive reading of code, source level debugging, and lengthy exercises to understand application flow and interrelations. You would think that managing such a code base would be a nightmare, but that is only true if developers were limited in their techniques for achieving application understanding. What I have learned working with our customers is that developers can effectively bypass this complexity by practicing the age-old mantra of visualizing success… and quite literally! 
Visualization has long been a common technique to facilitate comprehension of complex matters, from scientific equations to designing next generation vehicles, and this exact same technique can be applied equally well to application development. Make the leap from</description>
      </item>
      <item>
         <title>Agile2019 Recap: Rally's Back!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/agile2019-recap-rally-s-back-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/agile2019-recap-rally-s-back-rally-software</guid>
         <pubDate>August 15, 2019</pubDate>
         <description>Rally Software from Broadcom was once again a Title Sponsor at the Agile2019 conference in beautiful National Harbor, MD. We had representation from our product management, sales engineering, engineering and marketing teams. We were also very fortunate to have our friends from the Clarity PPM and BlazeMeter team join us, to share the latest and greatest in our performance testing and PPM solutions in the Broadcom portfolio. Our demo area was buzzing with traffic as Agile2019 attendees tried out the brand-new UI and explored new Rally functionality. What makes for a strong agile team? Each day, we posted a “Question of the Day” in our agility lounge. We were interested in what attributes, qualities or “things” make for a strong agile team. Here are some of the responses we received. Do you and your organization relate to any of the ones listed? Trust Energy Collaboration Continuous Learning Mindfulness Metrics Transparency Top-Level Support What’s in your LPM Toolkit? Rally Executive Advisors Laureen Knudsen and Chris Pola hosted a fully attended session, My LPM Toolkit: The Gambler + Sizing Chart, which they kicked off on a high note by playing a tune by Kenny Rogers. They shared techniques on how to start implementing and iterating on the fundamentals of lean portfolio management budgeting and planning. To access the presentation of My LPM Toolkit: The Gambler + Sizing Chart, view the PDF version on the Agile Alliance session page. Lightning Talks Our team gave a number of Lightning Talks throughout the week on a range of topics including: Continuous Planning, Lean Coffee practices, obtaining CFO support on WIP limits, and tips on successful PI planning. Want to learn more about these topics? Read Not Just “Another Meeting”: Lean Coffee for Beginners and Caffeinating Lean Coffee to Maximize Team Productivity, the blog-versions of</description>
      </item>
      <item>
         <title>Choice and Freedom for Mainframe Developers? You bet. - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/choice-and-freedom-for-mainframe-developers-you-bet-software-scale</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/choice-and-freedom-for-mainframe-developers-you-bet-software-scale</guid>
         <pubDate>July 16, 2019</pubDate>
         <description>Three years ago, I read this article, from which I had three big takeaways: Marc Andreessen’s quote “Software is eating this world.” We are still building n-tier applications and will continue to do so in the future. There’s never been more choice for developers of all skill levels in languages, tools, services and platforms. Circa 2019, all of the above are still true, and will be true for the foreseeable future, which is awesome news for *any* software developer and/or architect. On the surface that sounds like a no-brainer, until my brain and years of experience reminded me of the fact that there is a whole different world (no, sorry, not Jurassic World) called mainframe application development. The mainframe continues to be the backbone of the retail, finance, insurance, healthcare, aviation, and payment processing industries, to name just a few. Software applications running on the mainframe are a key part of that backbone, and software written 40 to 50 years ago has been running this world. Over time, with emerging technology and related business requirements, the mainframe as a platform continued to evolve and n-tier applications became the bread and butter for the platform. What is missing, however, is the ability for modern application developers to take advantage of or leverage the languages, tools, and services they are familiar with while working on the development, maintenance and modernization of mainframe applications. In recent years, the mainframe has evolved to support Node.js, Spark, Go, Python, REST/JSON, containerization and more. But a lack of flexible and comprehensive tools/frameworks that modern application developers are familiar with has been the Achilles heel in welcoming new blood to the mainframe platform. Eclipse, with its plugin technology, took a stab at solving this problem. But the nature of the framework, which requires adding in plugins for required</description>
      </item>
      <item>
         <title>What's Happening for CA APM at CA World? Find out here.</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-happening-for-ca-apm-at-ca-world-find-out-here</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-s-happening-for-ca-apm-at-ca-world-find-out-here</guid>
         <pubDate>November 7, 2017</pubDate>
         <description>Your Guide to everything Application Performance Monitoring &amp; Management at CA World '17 It's that time of year again: CA World! This year, we have a ton of amazing APM sessions from hands-on education to technical product demos and customer case studies, but with so many sessions, how do you choose which to attend? To help you navigate the waters, here's a quick guide to everything APM at CA World '17. Increase your product knowledge with hands-on Pre-Conference Education Sessions This year, we have 8 hands-on Pre-Conference Education sessions covering a variety of topics from monitoring modern applications in the cloud to creating custom dashboards in CA App Experience Analytics with Elastic Search and Kibana. These sessions are great to help you advance your product knowledge and maximize your investment. You can see the full list of education sessions here. Keep in mind, these sessions have limited space so pre-registration is strongly encouraged. In addition to our education sessions, there will also be a certification available for CA APM 10.x; this certification will help you position yourself as an APM expert at your organization. Be the first to know what the future holds for CA's Application Performance Monitoring Solutions Wondering what's next for our Application Performance Monitoring solutions? CA's VP of Product Management, James Kao, will be discussing the vision, strategy and roadmap for the CA APM portfolio and how they align with CA Digital Experience Insights, a unified SaaS platform offering customers a user-experience centric set of integrated services to monitor the entire digital-service chain. This is an absolute must-attend session, so make sure it's added to your conference agenda! Hear how industry leading companies are leveraging CA APM's suite of tools We've heard year after year that some of the most valuable sessions are those delivered</description>
      </item>
      <item>
         <title>Attracting the Next Generation of Talent to the Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/attracting-the-next-generation-of-talent-to-the-mainframe</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/attracting-the-next-generation-of-talent-to-the-mainframe</guid>
         <pubDate>January 21, 2019</pubDate>
         <description>January is a time for leaders to prioritize objectives for the coming year which, in my experience, are best set by being open to new ideas and tackling big challenges. The biggest of challenges for mainframe leaders is the growing skills shortage. While mainframes continue to run the most critical of enterprise applications, large numbers of professionals are aging out of the workforce while next-generation developers view the mainframe as a career limiting option. AppDev Challenge Mainframe software development has consistently delivered business value by pairing talented teams with tools that empower them to code, build and test at their best. Our DevOps portfolio has been serving the most demanding mainframe shops for decades, helping them deliver quality code while adhering to rigorous enterprise demands. CA Endevor SCM, for example, is a robust DevOps platform used to optimize software delivery performance.1 However, the world beyond mainframe has changed and the next generation of developers are now DevOps natives with hands-on experience with powerful cloud and open source tools that address specific problems. These tools help them work smarter by automating more of the workload in a &quot;shift-left&quot; environment. The lack of tool choice on the mainframe has become a real problem. In a recent survey by Stack Overflow, a site where modern devs congregate, 69% of those developing on the mainframe were not interested in continuing to do so. For those familiar with the new world, being restricted to mainframe native tools means being handcuffed. How to Attract &amp; Retain the Next Generation In their recently published book, &quot;Accelerate: The Science of Lean Software and DevOps&quot;, authors Nicole Forsgren, Jez Humble and Gene Kim highlight 24 key capabilities that, based on their research, improve software delivery in a significant way. Within this rich set of 24, two stand out in</description>
      </item>
      <item>
         <title>Mainframe Delivery Transformed, Pairing CA Endevor® Software Change Manager with CA Continuous Delivery for IBM z Systems®</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-delivery-transformed-pairing-ca-endevor-software-change-manager-with-ca-continuous-delivery-for-ibm-z-systems</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-delivery-transformed-pairing-ca-endevor-software-change-manager-with-ca-continuous-delivery-for-ibm-z-systems</guid>
         <pubDate>October 15, 2018</pubDate>
         <description>As more and more businesses embark on enterprise initiatives to adopt agile and DevOps practices, many seem to struggle with identifying a starting point, particularly with the mainframe teams. Steeped in decades worth of process refinement, it can be difficult to approach highly effective teams responsible for the mainframe components of critical business applications with any cost-savings measure that has not been discussed, investigated, considered and implemented. Talk about agile, and they may feel the incremental approach is non-applicable to the mainframe due to the monolithic characteristics of traditional COBOL applications. Mention DevOps and the CA Endevor® Software Change Manager administrators may argue the process is already in place. An acronym for ENvironment for DEVelopment and OpeRations, CA Endevor SCM was designed more than 30 years ago with groundbreaking parameterized scripting for continuous integration and deployment for mainframe-centric applications. Here's the interesting challenge we hear repeatedly from our customers - introducing revenue-generating products tends to be a factor of how long it takes mainframe teams to deliver the interrelated mainframe components. But they're not sure why and they're not sure what to do about it. Let's break down the why - it will help us to understand the what. Mainframe Delivery Was Optimized For years, organizations were focused on stability on the OPS (operational release into production) side of DevOps, while governance around the DEV side (developing, building, deploying, testing) was so process-laden, it almost purposely slowed the release, as if time-spent-in-the-life-cycle meant a risk-free journey into production. Here's the reason for the reluctance to change - this process works and has been repeatedly optimized. The problem is, even the most efficient mainframe delivery process is too slow. 
Mobile, Cloud, and Distributed assets associated with critical business applications back-ended by the mainframe can deliver three to five times faster. And while</description>
      </item>
      <item>
         <title>Security at Your Fingertips: CA and Samsung Discuss Biometric Authentication - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/security-at-your-fingertips-ca-and-samsung-discuss-biometric-authentication-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/security-at-your-fingertips-ca-and-samsung-discuss-biometric-authentication-layer-7-api-management</guid>
         <pubDate>December 6, 2017</pubDate>
         <description>In our upcoming Virtual Summit, CA and Samsung SDS Nexsign will explore biometric authentication technologies and standards, and demo some leading-edge use cases for biometric authentication. Learn more about how your business can build a next generation mobile app that integrates biometric authentication solutions to deliver a more secure and intuitive user experience. Register today. Consumers demand compelling app experiences; business success demands that security is built into mobile solutions. In the past, these two aims have seemed to be in conflict. Building robust security into applications has typically resulted in slower development times and impacts on user experience. While consumers are becoming more discerning about mobile security, they are rarely willing to sacrifice ease-of-use for peace-of-mind. Can performance and protection ever exist in harmony for mobile applications? At CA, we say yes, and these solutions couldn’t come soon enough. Just look to recent massive, high-profile data breaches from companies like Yahoo and Equifax to see the importance of security in web and mobile transactions. If these breaches taught us one thing, it’s that traditional knowledge-based authentication methods such as passwords and security questions are insufficient in today’s mobile-first world. Passwords fail. What other options exist? Passwords fail because they are easily forgotten, easily compromised, easily re-used, and easily shared. Solutions like Single Sign-On and behavioral-based authentication have emerged to address some of these shortcomings while providing a more seamless user interaction. But consumers and businesses are demanding 'passwordless' experiences that can be easily standardized across platforms. The FIDO Alliance was created to empower secure authentication among devices and online services while maintaining ease of use, privacy and security, and standardization. 
FIDO certification involves multi-factor authentication protocols such as Universal Second Factor (U2F) and Universal Authentication Framework (UAF) that prompt online services to seek a password plus an additional</description>
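FIDO's UAF and U2F protocols rely on public-key signatures produced only after a local check such as a fingerprint scan. As a stdlib-only sketch of the challenge/response shape, the example below substitutes an HMAC over a device-held secret for the real asymmetric signature (a deliberate simplification, not the actual FIDO wire protocol):

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    # Server side: a fresh random challenge defeats replay attacks.
    return secrets.token_hex(16)

def authenticator_sign(device_secret, challenge):
    # Device side: in FIDO this step only runs after a successful local
    # biometric check (fingerprint, face) unlocks the credential.
    return hmac.new(device_secret, challenge.encode(), hashlib.sha256).hexdigest()

def server_verify(registered_secret, challenge, response):
    expected = hmac.new(registered_secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(32)  # provisioned at registration time
challenge = issue_challenge()
response = authenticator_sign(secret, challenge)
assert server_verify(secret, challenge, response)
assert not server_verify(secret, issue_challenge(), response)  # a stale response fails
```

The key property carried over from the real protocols: the secret never leaves the device, and the server only ever sees a one-time proof bound to its own challenge.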
      </item>
      <item>
         <title>Unified &amp; Contextual Log Analytics: One-Two Punch For Optimizing IT Infrastructure Experience - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/unified-contextual-log-analytics-one-two-punch-for-optimizing-it-infrastructure-experience-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/unified-contextual-log-analytics-one-two-punch-for-optimizing-it-infrastructure-experience-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>December 21, 2017</pubDate>
         <description>Winning in the application economy is all about delivering differentiated customer experiences through multi-channel software applications. Proactively optimizing the experience of the IT infrastructure that runs these applications can be a strategic advantage. But today’s modern IT infrastructures are dynamic and hybrid in nature. Managing the performance of various infrastructure tiers spread across cloud and traditional infrastructures can be challenging. IT operations teams need to step up their infrastructure and cloud monitoring by leveraging analytics that provide actionable visibility. Here are two types of analytics that are a must: Unified Analytics The days of standalone data centers are giving way to hybrid infrastructures that combine traditional on-premises technology with public and private cloud. Even if you are 100% in the cloud, you still have multiple tiers to triage, such as hosts, containers, databases and storage. For some applications, these might be across various clouds (multi-cloud). Typically, IT teams end up needing to learn, configure, manage and integrate multiple monitoring tools. More than likely, even after doing all that they will still struggle to get end-to-end visibility of the infrastructure they are expected to optimize. This jumble of uncooperative management and monitoring tools results in an inefficient use of your IT resources, generating little or no business value. Teams are forced to manage long triage calls, have no user-centric visibility to speak of and are constantly at risk of over- or underutilizing cloud resources. IT operations teams need to leverage a single solution that provides unified analytics across their entire infrastructure and delivers the visibility they need. Through this visibility, they can intuitively and efficiently track status, monitor performance, spot trends, correlate metrics and more. 
Contextual Log Analytics Once teams have unified analytics implemented, they can identify where the issue lies, e.g. system vs. storage, or cloud vs. on-premises infrastructure.</description>
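The contextual idea, narrowing from a metric anomaly down to the log lines surrounding it in time, can be sketched as follows (a toy illustration with made-up log data, not any product's implementation):

```python
from datetime import datetime, timedelta

def logs_in_context(logs, anomaly_time, window_minutes=5):
    # Surface only the log lines that fall inside a window around the
    # moment an infrastructure metric anomaly was detected.
    window = timedelta(minutes=window_minutes)
    return [line for ts, line in logs if abs(ts - anomaly_time) <= window]

logs = [
    (datetime(2017, 12, 21, 10, 0), "storage: volume mounted"),
    (datetime(2017, 12, 21, 10, 14), "db: connection pool exhausted"),
    (datetime(2017, 12, 21, 10, 16), "web: 502 upstream timeout"),
    (datetime(2017, 12, 21, 11, 30), "web: health check ok"),
]
context = logs_in_context(logs, datetime(2017, 12, 21, 10, 15))
```

Real contextual log analytics would also filter by the affected tier and correlate across sources, but the core move is the same: start from the anomaly and pull in only the relevant slice of logs.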
      </item>
      <item>
         <title>Business Leaders Should Engage with APIs, Microservices and Application Architecture</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/business-leaders-should-engage-with-apis-microservices-and-application-architecture</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/business-leaders-should-engage-with-apis-microservices-and-application-architecture</guid>
         <pubDate>September 25, 2018</pubDate>
         <description>To succeed at digital, line of business executives must understand and invest in their application architecture, even if these areas are outside their comfort zone. Earlier this year, I explored this topic in a joint webinar with Ted Schadler, Vice President and Principal Analyst at Forrester, who advises C-suite leaders on digital transformation - and how to succeed at bringing agility and innovation to their organizations. After our presentation, I asked Ted a few more questions to clarify how sales, marketing, product, finance and operations executives should be aligning themselves with IT to accelerate digital transformation: DC: In our webinar, we discussed how underlying infrastructure can be critical to the success of digital initiatives. Given that this area isn’t a core strength for business leaders, what are some of the most important things they should understand about modern application architectures? Ted: We could probably write a book about this topic, or at least a lengthy article. Let’s start with a definition of what a modern application architecture is and why it’s a critical part of digital business and transformation. A modern application architecture is the infrastructure, the technology foundation, of digital business. A modern application architecture is built on microservices, cloud computing, artificial intelligence, security, and an Agile deployment capability. Having this foundation in place is critical to making technology a business asset that is vital to digital business success. Technology has become a business asset, just like people and facilities and partnerships are. Business leaders wrestling to become digital businesses have realized this, of course. So, the question is: Is their organization ready to move quickly to solve the new problems and meet the escalating demands of customers? 
To answer that question, business leaders must be smart about the technology that underpins their business success, including a</description>
      </item>
      <item>
         <title>What is Containerization and Will it Spell the End for Virtualization?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/what-is-containerization-and-will-it-spell-the-end-for-virtualization</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/what-is-containerization-and-will-it-spell-the-end-for-virtualization</guid>
         <pubDate>April 25, 2018</pubDate>
         <description>Containerization is disrupting the cloud – so what does that mean for virtual machines? Containerization is commonly thought of as the 'virtualization of virtualization' or 'next-generation virtualization.' However, containerization existed long before virtualization or the advent of modern technology like Docker and Linux Containers. Similar tech was built into the mainframe systems that pervaded IT throughout the preceding decades. Still, the biggest implication, as the name suggests, is that modern software containerization could have the same seismic impact on the IT industry as shipping containers had on maritime freight transport. Indeed, many major online companies are now running their entire infrastructure on containers. The reason behind the analogy, which is alluded to in Docker's logo, is that in the same way shipping containers allowed different products to be kept together when transported, software containers enable all the different elements of an application to be bundled together and moved from one machine to another with comparative ease. Essentially, they become lightweight and portable. Containerization fundamentals Containerization enables you to run an application in a virtual environment by storing all the files, libraries, etc. together as one package: a container. The container can plug directly into the operating system kernel and does not require you to create a new virtual machine every time you want a new instance of the application, or to run any other application that uses the same O/S. Keeping the entire application together means different services can efficiently share the operating system kernel. Containerization's rise to prominence is largely attributable to the development of the open source software Docker. While other container technologies were available before, Docker has brought separate workflows for Linux, Unix and Windows. 
The Docker engine, for example, bundles an application in isolation, enabling it to be easily moved to any machine or operating system as required.</description>
      </item>
      <item>
         <title>Managing and Automating the DevOps Toolchain</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/managing-and-automating-the-devops-toolchain</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/managing-and-automating-the-devops-toolchain</guid>
         <pubDate>November 15, 2017</pubDate>
         <description>How does CA Automic Release Automation help you manage your DevOps toolchain? Logic would suggest that in the modern software factory, new apps should be constructed more quickly and delivered at a higher quality. Similarly, as DevOps and agile methodologies become more sophisticated, you might expect the development lifecycle to be efficient and robust. Yet too often for enterprise organizations, this is simply not happening. The app is being moved around a vast “DevOps toolchain,” the management of which is untenable and leads to confusion, delays and mistakes. Different silos are forming and teams on the same project are drifting further apart, not closer together, using increasingly diverse tools and approaches. They are DevOps in name only. If you’re in this situation, leveraging the power of CA Automic Release Automation can help. Orchestrating the DevOps Toolchain On our website, you may have noticed the interactive Continuous Delivery Map. This will give you some insight into many of the major tools in widespread use, how complex the interconnected network of tools is, and also show that many organizations are using them for purposes beyond their original intention. This could be for a variety of reasons: perhaps developers are experimenting with the latest tech, perhaps you’ve acquired another company and inherited their technology, or maybe you’ve invested in something labelled a ‘DevOps tool’ without a clearly defined purpose. Indeed, this is not uncommon, and with open source evolutions and new iterations of different tools constantly being released, some have developed far beyond their initial use case. With 69% of organizations growing their current technology stack, and 82% reporting use of tools outside of the supported stack, this is a problem that isn’t going away and needs to be tackled proactively with an extremely scalable solution. Moreover, if you’re thinking</description>
      </item>
      <item>
         <title>Prediction with Purpose: Humanizing Artificial Intelligence and Machine Learning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/prediction-with-purpose</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/prediction-with-purpose</guid>
         <pubDate>October 18, 2017</pubDate>
         <description>Humanizing Artificial Intelligence and Machine Learning Ava walks into freedom to join the human world, leaving a screaming Caleb behind glass walls. This is the final scene of Ex Machina, a 2014 movie where Ava epitomizes the AI of the future. In the movie, Ava wants freedom because she has a conscience and therefore, a purpose. We are not there yet; we are nowhere close to Singularity. In today's world, artificial intelligence (AI), the concept of machines being able to carry out tasks in a way that we would consider &quot;smart&quot;, is for the most part a ‘how &amp; when,’ not a ‘why &amp; what.’ Machine Learning (ML) is an application of AI based around the idea that we should really just be able to give machines, like the mainframe, access to data and let them learn for themselves. At CA Technologies, we fundamentally believe in augmenting people by using technology to make their lives better. We set out to solve problems, understand our customers' purpose, and deliver meaningful solutions. CA Mainframe Operational Intelligence is one example of this. It uses AI &amp; ML to tackle big problems in the mainframe space such as slow MTTR, too many false positives, data fatigue and a growing skills gap. Using Anticipatory Design Techniques To understand the purpose behind our customers' goals, tasks and actions, CA mainframe engineering teams have integrated Design Thinking, specifically, anticipatory design, into our innovation process and new solutions such as CA Mainframe Operational Intelligence. This integration allows us to develop deep empathy for our customers. We use empathy, combined with AI and ML to deliver intelligent experiences. This technique marries the prediction of AI to the purpose of empathy. In other words, after a few selections, Netflix knows that you like Action movies and the occasional documentary about</description>
      </item>
      <item>
         <title>Eliminating the Curse of App Crashes with App Analytics</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/eliminating-the-curse-of-app-crashes-with-app-analytics</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/eliminating-the-curse-of-app-crashes-with-app-analytics</guid>
         <pubDate>July 13, 2018</pubDate>
         <description>Put an end to dreaded app crashes with CA App Experience Analytics This is the era of technology where the invention of mobile applications has created a new kind of market that sits right in your pocket. The clear evidence is the companies investing heavily in bringing B2C applications to the market. And with the surplus of mobile apps available to choose from, it is more important than ever that your application delivers a great user experience. For application stakeholders, an app crash is likely their worst nightmare. Depending upon the criticality of the application, a crash, or even slowness, has the potential to completely damage the reputation of the business. With the advent of social media, negative reviews spread like wildfire, causing both damage to the brand reputation and indirect monetary loss. In order to avoid this, it's imperative that you have the proper tools in place to monitor and collect the data needed to remediate issues before your customers are affected. What causes an application to crash? To avoid issues, we must first understand the potential actions that can cause them. There can be a lot of reasons for an application to crash, but the most common are: new releases; third-party dependencies like APIs; poor memory management due to device fragmentation; a slow user carrier network causing timeouts to the back-end or poor app responsiveness; and inadequately tested beta releases (an immediate bug-fix app version shipped without significant testing). How to recover from crashes with app analytics We understand that crashes are sometimes unavoidable and occur when you are least expecting them. Hence it is important to keep a watchful eye on their occurrences and take the necessary steps for remediation asap. CA App Experience Analytics (CA AXA) is a unique analytics solution that provides complete</description>
      </item>
      <item>
         <title>Simulate Business Critical User Journeys with Synthetic Monitoring - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/simulate-business-critical-user-journeys-with-synthetic-monitoring-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/simulate-business-critical-user-journeys-with-synthetic-monitoring-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 9, 2019</pubDate>
         <description>Stop being reactive – take a proactive approach to finding and fixing issues Customer expectations are higher than ever before – with the majority of today’s consumers expecting nothing short of a near perfect digital experience. But too often, IT teams are still forced to be reactive, only becoming aware of problems via customer complaints or support tickets. It’s time to take a more proactive approach. With a synthetic monitoring solution, you can monitor the performance of your applications even at times when you have no real users, allowing you to find and fix any issues before your customer experience suffers. The DX Application Performance Management (DX APM) solution includes a synthetic monitoring capability (DX App Synthetic Monitor) which enables IT teams to replicate user behavior and business critical transactions from a network of nearly 100 monitoring locations around the world. The solution now includes new real browser monitoring functionality which enables you to gain a more accurate picture of how users will experience your website through video session playback and detailed waterfall charts. This new functionality, called the Webdriver monitor, can be run from either Chrome or Firefox browsers and allows you to leverage any point-and-click web-based recording tool to capture transaction scripts, whether it be a login or checkout transaction – to ensure they are functioning as users would expect. Once the script is recorded, it is executed within DX APM’s synthetic monitoring tool, where you can watch video playback of the transaction along with access to a detailed waterfall view of the load times for each component on your website – helping you quickly identify the root cause of the problem. Unlike real user monitoring solutions, synthetic monitoring provides IT teams an advantage by allowing problems to be identified before users are</description>
      </item>
      <item>
         <title>Deploy Data-Intensive Applications Faster with DataOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/deploy-data-intensive-applications-faster-with-dataops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/deploy-data-intensive-applications-faster-with-dataops</guid>
         <pubDate>August 2, 2018</pubDate>
         <description>Emerging DataOps makes it possible to extend continuous delivery to analytics. Data is no longer just the exhaust emanating from operational systems. It is an essential ingredient to every business strategy, underpinning decision making on everything from your customers and sales, to finance and support. Your ability to harness this data of ever-growing volume, variety and velocity is at the heart of your future growth. This is where DataOps comes in. DataOps—an abbreviation of data operations—is an agile methodology to develop and deploy data-intensive applications. Largely motivated by the growth of machine learning and data science groups within the enterprise, the practice requires close collaboration between software developers and architects, security and governance professionals, data scientists, data engineers and operations. DataOps aims to promote repeatability, productivity, agility and self-service while achieving continuous data science model deployment. Put simply, DataOps is about aligning the way you manage your data with the goals you have for that data. For example, let’s assume you want to reduce your customer churn rate. You could use your customer data to build a recommendation engine that surfaces products that are relevant to your customers—thereby potentially making them more loyal to your brand. However, that only works if your data scientists have access to the data they need to build that system and the tools to deploy it. It also assumes they can integrate it with your website, continually feed it new data and monitor performance—an ongoing process that will likely include input from your engineering, IT, and business teams. DataOps strives to foster collaboration between data scientists, engineers and technologists so that every team is working in sync to leverage data more appropriately and in less time. 
Big data implicitly promotes DevOps because there is no ability to separate operations from development when you ultimately discover</description>
      </item>
      <item>
         <title>Top 3 Reasons to do Big Room Planning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/top-3-reasons-to-do-big-room-planning-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/top-3-reasons-to-do-big-room-planning-rally-software</guid>
         <pubDate>July 30, 2019</pubDate>
         <description>When you have delivery teams dispersed across different locations, planning can get messy. Luckily, collaborative planning in one big room has proven to be an effective remedy to this situation. In fact, here at Rally Software, we’ve been doing Big Room Planning since our inception. Here are three reasons why you should, too. When you plan together, you reach your goals faster Our Agile advisors have estimated that in order to produce a mid-range plan for a delivery group, it takes 20,000+ informed decisions. The reality is, when you have everyone in the same room, you can make those decisions and deal with any dependencies quickly. At the end of planning, your path forward may only be mostly right, but that’s why you put mechanisms in place to adjust along the way. What’s even better is that you can form a sound plan in just two days. And when you get really good at it – you can get it done in a day. Compare that to planning cycles that take weeks or even months when you’re using email, chat, and a never-ending exchange of spreadsheets. Planning together results in better plans. When the people who do the work are included, you get better plans Traditionally, plans were created by managers. While managers may think that they have great experience, being removed from the work means that their experience is no longer current. The only experts at the detail are the people who do the work every day. What managers bring to the table is context, which is why you need them in the room, too. By including both parties in the planning process, you significantly mitigate risk right from the start. All of the implementation questions are easily resolved simply by getting out of your chair and speaking with the people</description>
      </item>
      <item>
         <title>Leveraging App Analytics to Improve Digital Experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/leveraging-app-analytics-to-improve-digital-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/leveraging-app-analytics-to-improve-digital-experience</guid>
         <pubDate>August 13, 2017</pubDate>
         <description>Dear IT Ops, App Performance is No Longer Your Job Clickbait? No, hear me out. The role of IT Operations is changing and that shouldn't come as a surprise. IT budgets are flat and at first glance, this may seem like a very bad omen. With the increased complexity of applications and infrastructure, how is IT Ops expected to do more with less and maintain high availability for more apps than ever before? The answer is that IT Ops needs to get out of the job of focusing solely on performance and become the torchbearer of improving digital experience. While IT Ops budgets are flat-lined, spending on digital initiatives is increasing. A recent survey by Vanson Bourne revealed that 54% of organizations expect to significantly increase spending on digital initiatives in the next 12 months. Obviously, IT Ops is a core component of these digital initiatives, but the organization needs a facelift. Becoming the Torchbearer Infrastructure is more complex than ever. When you combine APIs, microservices, and containers with private, public, and hybrid cloud and legacy infrastructure, like the mainframe, monitoring applications and providing great app performance is cumbersome to say the least. Now, more than ever, IT Operations needs a solution that can proactively monitor all of these environments and quickly provide the insights they need to rapidly triage issues, so fewer man-hours are spent on finding and fixing issues and more time can be spent elsewhere. And the best place to spend that time is to focus on the digital experience, which will not only increase IT's value to the organization, but also increase the company's ability to attract and retain new customers and increase revenue (by an average of 21%). Going from Ops to Experience Transforming your organization isn't going to happen overnight, but for IT to increase</description>
      </item>
      <item>
         <title>PODCAST: AIOps for NetOps Discussions with Shehram Jamal, Broadcom Product Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-aiops-for-netops-discussions-with-shehram-jamal-broadcom-product-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-aiops-for-netops-discussions-with-shehram-jamal-broadcom-product-management</guid>
         <pubDate>April 29, 2019</pubDate>
         <description>Did you miss our April 10, 2019 feature release webcast: “What’s New in CA NetOps v19.1”? The replay is here. Shehram has over 14 years of product management experience with Cisco, Nokia, Citrix and several start-ups, working in the US and internationally throughout Europe, Asia, Africa and the Middle East. His latest stint was at Cisco, where he was responsible for machine learning-based predictive analytics for their networking products. He has an MBA from Duke University and a Bachelor's in Software Engineering. He loves music, traveling and is an avid squash player.</description>
      </item>
      <item>
         <title>Future of the Mainframe - An Introduction to Data on Demand (Data as a Service)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/future-of-the-mainframe-an-introduction-to-data-on-demand-data-as-a-service</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/future-of-the-mainframe-an-introduction-to-data-on-demand-data-as-a-service</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>By Vikas Sinha, SVP of Mainframe Business Unit, CA Technologies What do your DBAs and SysProgs think about data? Do they see themselves as custodians of a treasure trove of untapped business value and potential innovation? I would go out on a limb and say most don't. Here are three opportunities for data stewards and custodians to work with data scientists, line of business and analytics teams to reduce the time taken to develop insights. Opportunity 1. Support algorithms and machine learning Gartner predicts that &quot;by 2020, algorithms will positively alter the behavior of over 1 billion global workers.&quot;[1] Mainframers can actively improve machine learning by offering high-volume mainframe data to refine and train algorithms. This approach is already in play in IT operations management. Machine learning is used in predictive analytics that takes log data to dynamically determine thresholds and predict outages. This is a great example of how data can significantly improve mainframe performance and enhance your customer's experience. Opportunity 2. Get better results from Big Data tools like Spark and Hadoop Hadoop remains the most economical all-round choice for processing vast amounts of structured and unstructured Big Data. But with Spark, a newer computing framework, diverse data can be fed into your analytics system almost as soon as it's captured on the mainframe. Spark's distributed memory-based architecture provides speed and the ability to handle streaming data, making it better suited to machine learning, as algorithms can be trained in-situ and refined in near real-time. These real-time insights can be used in innovative, value-driving ways: from dynamically delivering customer recommendations to more accurately predicting infrastructure performance. Opportunity 3. 
Test new business models with Blockchain In Gartner's view, &quot;by 2022, a Blockchain-based business will be worth $10 billion.&quot;[2] A Blockchain is a &quot;chain&quot; of data &quot;blocks&quot; that prove a sequence of</description>
      </item>
      <item>
         <title>The Art of Rollback</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback</guid>
         <pubDate>August 28, 2018</pubDate>
         <description>Episode 1: Is Rollback Really Possible? Show Me! The objective of this blog series is to help you define your application deployment rollback processes. We will take a look at what state the rollback world is in today, and describe the different strategies along with their benefits and drawbacks depending on your components, applications, releases and dependencies. I will not wait for the last episode to unveil the conclusion of the series: yes, it is possible to design a good rollback process. But, you may need to make some substantial changes in your current architecture to achieve that goal. Whenever it's time to discuss the rollback feature during a demonstration of CA Continuous Delivery Automation to a customer, their reaction can go one of two ways. The first one: ‘Rollback is automated – the user has nothing to define, right?’ The second one is clearly a step beyond that: ‘Rollback is automated, of course, but… how does it work? It must be magic!’ How It Works Sorry, but release automation tools are not as magical as you might think. A rollback has nothing to do with artificial intelligence or, indeed, the supernatural: you must define it. What the release automation tool can do is decide which steps or workflows must be executed in case of error, depending on a test or on the last error code generated. But the steps or the workflows themselves must be defined, just as they are in any other deployment process. However, a rollback workflow must support some behaviors that are more specialized than those of a regular deployment workflow. For example, the user must keep the inventory untouched, and must send an error signal to the parent workflow. The bad news is that it will probably be a while before auto-generated rollbacks are possible – and when they</description>
      </item>
      <item>
         <title>Happy 60th Birthday, COBOL!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/happy-60th-birthday-cobol</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/happy-60th-birthday-cobol</guid>
         <pubDate>March 24, 2019</pubDate>
         <description>I belong to the category of what they call “millennial developers”. I love programming, and I am familiar with many development languages like Java, JavaScript, Ruby, TypeScript, Python etc. I work on cross-platform applications in a large organization and every day I rely on modern development tools to do my job. When I joined my company, I discovered an entirely new world: back-end applications are often written in COBOL and they run on the mainframe – I admit that was quite a surprise for me at the time. It took me a while to understand and acknowledge the importance of COBOL and, to an extent, its irreplaceability. Taking my company as an example, we have hundreds of thousands of COBOL files that are essential to supporting our business, and that is hardly an exception. Most transactions across the industry are run on mainframes and the dominant language by far is COBOL. The world is run by COBOL! COBOL is now 60 years old — whoa! And it is far from disappearing. This is impressive. I got intrigued and could not comprehend how this language became so dominant. I started looking into the history and I was even more amazed: COBOL was designed by a small team led by Grace Hopper. She popularized the idea of a machine-independent language, and she believed programs should be written in plain English. Therefore, even though I am not a COBOL specialist, I can understand COBOL code and perform changes as needed. Thank you, Grace! In my little research, I stumbled upon another interesting person — one of the members of Grace Hopper’s team, Jean E. Sammet. So the leader was not the only woman on the COBOL team! Jean was one of the key developers who designed the COBOL language. I could not be</description>
      </item>
      <item>
         <title>Faster, Streamlined Azure and OpenStack Provisioning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/faster-streamlined-azure-and-openstack-provisioning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/faster-streamlined-azure-and-openstack-provisioning</guid>
         <pubDate>January 15, 2018</pubDate>
         <description>What are the benefits of introducing Uber Orchestration to your cloud provisioning? Disruption is upending every industry. To ensure your organization can thrive, it needs to be built to change. Modern companies aggressively renew themselves to add value and avoid commoditization. A company that is built to change works hard to understand markets, find the underserved white spaces and invent ways to serve customers better. These modern companies get up and running quickly. By saving time and money, they can focus squarely on delivering products and experiences their customers want. A truly agile, built-to-change organization also puts the customer at the center of everything it does. It innovates and executes at high velocity. And it improves continuously. That need for speed and customer-centricity is brought into sharp focus when you need to provision Microsoft Azure, open source OpenStack or other software. Here the goal is to rapidly create public or private clouds and bring apps and business solutions to life. You might be thinking: &quot;That's fine. Azure and OpenStack both have their own provisioning tools built in. I can use those to quickly create a private or public cloud.&quot; However, it's not that simple. Cloud providers like Azure and OpenStack should be interchangeable, depending on what they can offer and for what price. You want the flexibility to switch between cloud providers and between cloud, on-premises and hybrid infrastructure as and when the business requires – with minimal effort and without losing any control. Sometimes you might want to use Azure, OpenStack and maybe another provider at the same time, leveraging the advantages of each provider simultaneously. Step Forward Uber Orchestration What is needed is an orchestration layer that remains constant while cloud services like Azure and OpenStack come and go – one that your enterprise owns along with their core</description>
      </item>
      <item>
         <title>AutoSys Workload Automation Is Cloud Ready</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/autosys-workload-automation-is-cloud-ready</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/autosys-workload-automation-is-cloud-ready</guid>
         <pubDate>July 10, 2019</pubDate>
         <description>What does it mean to be cloud-ready? If you do a Google search you will find 979M results with all kinds of websites explaining cloud readiness. Regardless, we are not trying to explain the term. We want to give information on how to make the best out of your AutoSys environment as you plan your cloud readiness approach. First, let’s define two areas of cloud readiness. Deployment environment: this is where you want the solution to reside. The AutoSys solution, either entirely or partially, may be deployed in a private/public cloud, traditional on-premises or in a hybrid environment. Application architecture: the way the application was architected to run (e.g., classic command line or cloud-enabled). AutoSys is capable of executing cloud-deployed applications using a variety of methods. The way these are automated depends on the type of application (e.g., command-line, SaaS/PaaS, containerized, etc.). Do you have cloud initiatives? If so, offer your AutoSys solution to ensure it is automated. Despite what other competitors may say, AutoSys is future-ready, proven, and able to cover your cloud needs as well. What are the steps to get to the Cloud? Determine where you want to deploy the solution. If you choose to continue running AutoSys on-premises, nothing else is required from the solution perspective. If you wish to deploy AutoSys itself to the cloud, select your preferred cloud platform (e.g., Amazon, Microsoft Azure, or Google) and determine whether you want to utilize consumption-based cloud database services such as Amazon Relational Database Service (RDS) or Azure SQL Server, or a bring-your-own-license (BYOL) deployment of a supported database (i.e., MS SQL Server, Oracle, Sybase). AutoSys licensing is the same regardless of your deployment choice. Identify the application types you wish to automate. Classic command-line applications that have been moved to the cloud, typically</description>
      </item>
      <item>
         <title>CI/CD Pipelines and Architecture Fitness Functions for Mainframe Platforms and Beyond - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/ci-cd-pipelines-and-architecture-fitness-functions-for-mainframe-platforms-and-beyond-software-scale</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/ci-cd-pipelines-and-architecture-fitness-functions-for-mainframe-platforms-and-beyond-software-scale</guid>
         <pubDate>August 12, 2018</pubDate>
         <description>DevOps engineers use CI/CD pipelines (e.g., checkout -&gt; build -&gt; unit test -&gt; package -&gt; integration test -&gt; other kinds of tests -&gt; deliver/deploy) to run a battery of automated tests when code is checked in to validate code correctness and to evaluate that a software solution is functionally sound and behaving well across various testing stages (e.g., smoke, unit, integration, etc.) and deployment environments (e.g., dev, qa, production, etc.). The use of these automation pipelines is one of the cornerstones of a successful DevOps transformation that enables DevOps engineers to work more efficiently and predictably while delivering solutions that are of higher quality and sounder functionally – a boon for DevOps engineers and customers both. As DevOps-based teams achieve and sustain velocity, many code changes across many teams will be checked in. With all this change, can we also know whether a solution’s architecture stays true to its aims and constraints? Historically, architects might review a solution for compliance with architecture “building codes” – a codification of architectural concerns and constraints software solutions should adhere to. Such an approach faces some challenges in that 1) it is a manual process; and 2) different solutions may have different architectural concerns and constraints that are relevant – in this case, one size does not fit all. How can this approach be improved? Can automation pipelines be applied to architectural concerns? The answer is yes! As we’ll see, Enterprise Architects can benefit from the same DevOps best practices and techniques that help DevOps engineers deliver better code and functionally sound software solutions. 
In this article we’ll take a brief look at how Enterprise Architects can include automated architecture fitness functions in CI/CD pipelines to have visibility into whether an architecture remains congruous with its goals and constraints as teams deliver software changes over time. These techniques can be applied to mainframe as well as any</description>
      </item>
      <item>
         <title>Reducing Uncertainty in Agile Product Development</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/reducing-uncertainty-in-agile-product-development-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/reducing-uncertainty-in-agile-product-development-rally-software</guid>
         <pubDate>June 25, 2019</pubDate>
         <description>Take an economist’s viewpoint towards product development, or the software development lifecycle (SDLC), and you soon realize these activities tell a very human story: As humans, we try to find ways to lower uncertainty so we can exchange value. Today, economists are starting to observe how the benefits of blockchain technology lower uncertainty by transforming how we exchange value in our economies. So, if we are trying to lower the uncertainty in the exchange of value in both of these scenarios, it begs the question: What are these benefits that lower uncertainty and improve the exchange of value, and how can I achieve them in product development? Focusing On Value Here is what I believe to be true: To reduce uncertainty and deliver value in product development, you need a data strategy that mirrors these qualities of blockchain. Why? Elementary, my dear Watson: Having a management information system that provides identity, transparency, and recourse helps reduce uncertainty around ‘the work being done’. That, in turn, helps shift organizational behaviors so that there can be a greater focus on value. In the digital economy, I think the right data strategy and technology infrastructure are the foundational pillars to business agility. It’s that sweet spot where people unleash their potential, and work is aligned to the things deemed to be the most valuable. That would make any product development flow more effective, right? So, are you curious, and ready to know what we can learn from blockchain about an ALM data strategy for product development? Principles and Frameworks There are many principles, frameworks, and methodologies around new ways of working, each with their own collection of practices and processes. At their core, these principles are designed to lower uncertainty and help us deliver more value. Yet, at the very edges of</description>
      </item>
      <item>
         <title>Big Data Automation: The Next Frontier for Innovation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/big-data-automation-the-next-frontier-for-innovation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/big-data-automation-the-next-frontier-for-innovation</guid>
         <pubDate>July 29, 2018</pubDate>
         <description>Automation enables you to manage big data and innovate at the pace of business. Big data is big business. The data collected across your organization’s business units, applications, and external sources is growing exponentially. It’s pouring out everywhere: from digital customer transactions and Internet of Things (IoT) sensors, to social interactions and support services. And it won’t stop: more and more applications, for example, are hosted as a service in the cloud and within big data instances like Hadoop. Reliable, timely big data is crucial for faster, more informed decision making. It enables healthcare scientists to examine vast volumes of genome sequencing data to crack some of society’s most difficult diseases. It allows retailers to predict which clothing range will fly off the shelf next season. It means phone companies can examine millions of call and data logs to offer customers ‘in the moment’ offers. And it enables police forces to follow patterns of citizen behavior to plot the next crime hotspot. But here’s the problem: Big data is a complex beast. You need to efficiently capture and store the data as it emerges in any volume, velocity or variety. You have to distribute it to hundreds of downstream applications—sometimes in real-time. You need to be certain the data flows are continuous and scalable, from the source to the analytics. And you need the skills and resources to design and operate the big data flows. These are just some of the challenges organizations face as they struggle to get the value they need from data. Different formats of data: Multiple data sources need to be brought together for meaningful analysis. Unfortunately, such data can be in different formats (relational database, simple file structures, images) and may need to be transformed into a normalized format. IoT compounds complexity: Ensuring that IoT</description>
      </item>
      <item>
         <title>AIOps and Monitoring: Like Peanut Butter and Chocolate</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-and-monitoring-like-peanut-butter-and-chocolate</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-and-monitoring-like-peanut-butter-and-chocolate</guid>
         <pubDate>November 28, 2018</pubDate>
         <description>Why more organizations are choosing a combined platform approach Some things just go better together. Macaroni and cheese. Pen and paper. Tom and Jerry. Scully and Mulder. Hardware and Software. And yes, peanut butter and chocolate. In many cases the individual elements are great by themselves, but often there's a 1 + 1 = 3 thing happening. Still, in some cases you absolutely need the pairing to get the most benefit and value. I suggest that AIOps and Monitoring is one such duo. AIOps uses AI techniques, machine learning and advanced algorithms, applying these to the big data derived from various IT and business monitoring tools. The point of AIOps is to enhance IT Operations' ability to make faster, smarter and even automatic decisions to deliver services to its customers with better user experience, greater innovation and efficiency. It's evident that without monitoring data on which to operate, AIOps alone seems almost theoretical. This data provides grist for the mill, giving the advanced analytics something to chew on. CA Digital Experience Insights is an open, integrated and flexible AIOps-driven platform. Integrated because it includes CA's full stack monitoring capabilities across users, applications, infrastructure and networks. Open, because of 30+ integrations with 3rd party monitoring solutions. Flexible because it can be deployed via public, private or hybrid cloud. The openness and integration of CA's AIOps platform is achieved through a unified data lake and powerful advanced analytics which leverage open technologies such as Elasticsearch, Kibana and Apache® Spark. Monitoring tools pour into this data lake things like metrics, alarms, topology information, logs, text and wire data. That's where operational intelligence is applied to normalize, correlate and analyze the data. Infused in the solution is CA's domain experience across all aspects of IT monitoring and management. And, it leverages CA's industry</description>
      </item>
      <item>
         <title>Mainframe and Cloud: Is There Room for Only One?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-and-cloud-is-there-room-for-only-one</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-and-cloud-is-there-room-for-only-one</guid>
         <pubDate>August 7, 2018</pubDate>
         <description>With the summer in full swing, you've probably enjoyed a few evenings watching the latest movies in theaters or online. I was recently reminded of a much older big-screen scene: a dry, dusty street in the Old West with two cowboys facing each other, hands over their holsters. One drawls out the iconic line… &quot;This town ain't big enough for the both of us!&quot; This scene comes to mind now because it's still happening… or at least some companies think it is: a showdown is being played out between mainframe and the cloud, where some think there's room for only one platform in the streets of global business. But, in the final scene of this Old-West-style story, you find out: &quot;Big Iron&quot; and &quot;The Cloud&quot; aren't sworn enemies. Instead, they're partners (dare I say pardners); in fact, working together, they can achieve so much more! Breaking the Myth: How Platforms are Working Together The proliferation of cloud is old news now. We mainframers recognize the growth and value of the cloud compute infrastructure. After all, according to Forbes, spending on cloud computing has grown more than 4 times the rate of IT spending since 2009. In the recent 2018 State of the Open Mainframe Survey Report, a study CA Technologies engaged in as part of our membership in the Open Mainframe Project, the vast majority of survey respondents said that they consider the cloud an augmentation to (rather than a replacement for) the mainframe. In fact, respondents didn't consider cloud environments as securable, nor as great a value for the cost, as the mainframe. In an ideal world, businesses leverage the best of both computing worlds to create a hybrid, multi-cloud environment, thereby optimizing their systems to achieve digital transformation. In support of this approach, our teams here at CA</description>
      </item>
      <item>
         <title>Achieve Faster Time To Value with New Cluster Management Console in DX APM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/achieve-faster-time-to-value-with-new-cluster-management-console-in-dx-apm-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/achieve-faster-time-to-value-with-new-cluster-management-console-in-dx-apm-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>August 20, 2019</pubDate>
         <description>Today’s APM solutions often require a vast amount of time and effort to implement – especially for enterprise companies that are monitoring a large number of applications. As businesses focus on digital transformation, traditional solutions have become a bottleneck, slowing down the on-boarding process and making it difficult to scale out while creating a lot of administrative overhead. To combat these challenges, DX Application Performance Management is now based on a modern microservices distributed cloud architecture which provides customers with a variety of benefits – one being quicker time to value. DX APM now includes a new cluster management console which allows users to quickly and easily configure, manage and scale their APM infrastructure. In this admin console, you will be able to see an overview of all services and tenants in one place. You can also perform the following tasks: onboard and create new tenants in a matter of minutes; quickly increase the size of tenants to scale based on growing requirements; upgrade tenants in just one click; and view utilization of the hardware resources along with metrics for all services in one single place. How Customers Are Benefiting Our customers are already seeing great benefits from leveraging this new feature: Large Telecommunications Company Gains Complete Visibility Into Their Environment Before adopting DX APM 11, this client was struggling to manage their large environment and was unable to keep track of the number of clusters running and where they were located. Due to the size of their environment, updating the infrastructure was a very laborious and time consuming process. Now with DX APM 11 they have not only gained complete visibility into their environment in one single UI – they can also perform updates with just one click, drastically changing how they manage their APM infrastructure. Large Global Automaker Reduces Resource Costs This</description>
      </item>
      <item>
         <title>Developers Engage Warp Drive with the Original DevOps Solution</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/developers-engage-warp-drive-with-the-original-devops-solution</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/developers-engage-warp-drive-with-the-original-devops-solution</guid>
         <pubDate>October 24, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 3 In part 2 of our playbook, Modernizing Development on Mainframe, my colleague Sujay Solomon outlined the key objectives for supporting developers towards a successful modernization effort. Developers desire choice and flexibility in applying their highly coveted skillset, and will thrive with managing legacy applications when they are empowered to use preferred, best-in-class tools, and moreover, treat the mainframe as any other development platform without having to learn its specificities. This group is also being influenced by organizational changes underway in the line of business to transition to agile development and increase business agility. Developers, along with their counterparts like quality assurance engineers, are breaking out of siloes and merging into scrum teams. At the same time, the tools and processes they wield are being comprehensively reevaluated by DevOps architects, IT Operations executives and more - against measures of risk, effectiveness and cost. With such a myriad of KPIs at play, identifying the sweet spot for developer tooling might seem unattainable. In part 3, allow me to share how the dream is well within reach for the many customers I engage on a regular basis. Introducing the original DevOps solution At the heart of most mainframe organizations sits CA Endevor® Software Change Manager. The product name is actually an acronym, ENvironment for DEVelopment and OpeRations; a legacy of pioneering innovators with far-reaching aspirations that we now call DevOps. 
It is the tool of choice for CA Technologies – managing, securing, and deploying millions of our mainframe software assets, enabling us to be responsive to customer needs and agile in meeting the ever-changing demands of Digital Transformation. Not surprisingly, as a DevOps solution purpose-built for flexibility, CA Endevor SCM is well-suited for enabling an organization to meet the measures of risk, effectiveness and</description>
      </item>
      <item>
         <title>The Art of Rollback, Part 2</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback-part-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback-part-2</guid>
         <pubDate>September 9, 2018</pubDate>
         <description>Defining object and component types, and which rollback strategy to select I hope you are enjoying this blog series on rollback. If you haven't yet had the chance to read part one, please go back and find it here, as it will really help you with this next installment. In order to better understand the challenge of software delivery, I would like to introduce the component and object types involved, their respective properties and how they combine. Object Types Objects in descending size order: Information system: Do you really want to deploy or roll back the entire information system? If yes, you need disaster recovery! Release: A release, as defined by CA Continuous Delivery Automation, is the biggest object you can deploy in one click. A release is a group of applications that need to be deployed together for different reasons, usually because there are dependencies between them or because they share a common calendar or release window. Deploying a release means deploying all of its application packages, sequentially or, ideally, in parallel. Application: An application is a group of components. It is a service or a tool that can be used on its own. That doesn't mean there can't be integration with other tools and dependencies, but it does mean that if the integrations are not running, users can still use the application on its own (there are some exceptions to this definition of course). Deploying an application means deploying all of its component packages, sequentially or, ideally, in parallel. Component: A component is the smallest piece of software you can deliver and track using version numbers. Deploying a component package means executing a deployment workflow made of jobs on target machines. Job: A job is a basic action, performed on a target machine. A job can have its</description>
      </item>
      <item>
         <title>Hard Work Paying Off - Broadcom Customer Support and the NorthFace Scoreboard Award - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/hard-work-paying-off-broadcom-customer-support-and-the-northface-scoreboard-award-software-scale</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/hard-work-paying-off-broadcom-customer-support-and-the-northface-scoreboard-award-software-scale</guid>
         <pubDate>August 20, 2019</pubDate>
         <description>
Recently, we received external confirmation that all our hard work in Broadcom Mainframe customer support is paying off. You may have heard of the Customer Relationship Management Institute (CRMI), a leader in studying and promoting excellence in customer experience. We recently announced that CRMI has named us a recipient of its 2018 NorthFace Scoreboard Award. This award is based on the results of actual customer satisfaction surveys, including Net Promoter Score information. This incredible feat is even more amazing because it’s nothing new to us — our organization has won the award for six straight years. And it is thanks to you: your feedback validates that the fruits of our labor are paying off.

For example, you told us you wanted more timely customer support. As such, we’ve focused on timely response and resolution. With respect to initial response, we are meeting our Service Level Objectives over 99% of the time. And, we’re solving your issues faster than ever and have reduced the time to resolution for our mainframe support cases by 15% over the past several years. We were already fast, but we became faster.

So how did we transition into such a successful organization? We started by listening carefully to what you wanted from us. We studied what the best practices in customer support are all about. We spoke to industry analysts. And then we applied what we learned over a challenging seven-year journey. In the next series of blogs, I want to share the values we adopted and the steps we took, all focused on serving you better, one case at a time.

</description>
      </item>
      <item>
         <title>Agile and DevOps with Mainframe teams - Throw the book away! - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/agile-and-devops-with-mainframe-teams-throw-the-book-away-software-scale</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/agile-and-devops-with-mainframe-teams-throw-the-book-away-software-scale</guid>
         <pubDate>July 24, 2019</pubDate>
         <description>Well, throwing books away might be extreme, but so is implementing everything a book or an expert says, word for word, especially when the author has no knowledge of your organization. Agile and DevOps are two of the biggest buzzwords in the industry, synonymous with digital transformation. An organization not already practicing Agile and DevOps or not transforming itself to adopt them can be seen as problematic. This frequently leads to an organization-wide transformation program simply as a checkbox exercise, implementing processes and tools in a uniform manner rather than leading a digital transformation to fit the needs of each team. Large-scale change initiatives supposedly often fail because human beings are resistant to change. But we humans make changes in our lives daily, so how can we be resistant to change? More likely, the reason is that many of these initiatives fail to address the key concerns of individual employees and teams. Often only the benefits to the organization as a whole are clearly communicated. A 2016 Coleman Parkes study of 1,770 senior enterprise IT and business executives found that adding DevOps to Agile practices improved new business growth by 63% and DevOps speed to market by 42%. Additionally, more than 80% of enterprises that have embraced digital transformation and adopted Agile and DevOps practices saw an improvement in customer experience. These statistics tell us why digital transformation matters to an organization, but not how it matters to individual employees. The next error that organizations make is to select methodologies that all teams will use, thus assuming that one size fits all. When CA Technologies (now part of Broadcom) embarked on an Agile transformation in our Mainframe Business Unit in 2013, we made this mistake too. We mandated that all teams adopt Scrum with two-week sprints and later mandated</description>
      </item>
      <item>
         <title>How to architect a robust delivery pipeline for cross platform DevOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/how-to-architect-a-robust-delivery-pipeline-for-cross-platform-devops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/how-to-architect-a-robust-delivery-pipeline-for-cross-platform-devops</guid>
         <pubDate>October 31, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 5 In my last blog from the playbook Modernizing Development on Mainframe, I described the key criteria for businesses to hire and retain the best development talent, and the mindset needed to support these developers towards a successful modernization effort. Businesses who succeed allow their developers to use their preferred, best-in-class tools, and moreover, empower them to treat the mainframe as any other platform without having to learn its specificities. In this part, I'll discuss how to extend that mentality to another key persona, DevOps architects, and how application delivery pipelines must be rearchitected so that mainframe development is not perceived as a bottleneck to Digital Transformation. Building a robust pipeline for Digital Transformation is quite the challenge. For one, DevOps architects must contend with the technical complexity of precisely orchestrating activities across platforms for effective multi-modal development, test and delivery. More critically, pipeline activities must be managed against an array of seemingly conflicting agendas from organizational groups undergoing their own internal transformation towards agile development and greater business agility. Simply building a pipeline that can functionally orchestrate across platforms is only table stakes. Businesses differentiated by effective Digital Transformation use it to also accomplish their strategic goals of mitigating risk, amplifying performance of teams in development, test, operations and security, and managing against increasing cost pressures. The answer starts with choice To be clear, empowering development teams with the flexibility to choose best-in-class tools is still important, especially as new-to-mainframe developers seek to quickly hit their peak performance and take on mainframe tasks that are increasingly backlog priorities. 
Enabling choice is also essential for the welfare of DevOps architects. With Project Brightside, a DevOps architect is empowered to: Enable continuous integration for a cross-platform application that may include, amongst other things, distributed</description>
      </item>
      <item>
         <title>CA APM Middleware Transaction Enablement with Nastel AutoPilot Insight</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-middleware-transaction-enablement-with-nastel-autopilot-insight</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/ca-apm-middleware-transaction-enablement-with-nastel-autopilot-insight</guid>
         <pubDate>May 17, 2018</pubDate>
         <description>Middleware technologies are used to simplify communications between applications. Unfortunately, middleware is often a black hole for production support teams, who see messages enter it but never come out, making it difficult to troubleshoot application performance problems.

CA Application Performance Management (CA APM) is now enabled by the Nastel AutoPilot Insight solution to eliminate the 'middleware black hole' and extend monitoring capabilities for an end-to-end transactional view across middle-tier architecture components like IBM WebSphere MQ and IBM DataPower.

Correlating transactions through DataPower with detailed transaction handling gives us key insights into the performance and health of each individual transaction and its sub-flows through a DataPower appliance.

Let's have a look at a very basic example of a typical application with transactions across multiple tiers.

Our unique integration allows you to monitor all components in the system by directly identifying the transaction and health status of the DataPower appliance.

In this example, the DataPower appliance is showing some problems, and the right panel has already detected a problematic transaction, identifying the Mediator component listening on the reply queue as the root cause.

CA APM, in combination with Nastel AutoPilot Insight, automatically discovered the transactional map in CA APM Team Center.

The DataPower transaction is discovered, and by expanding the &quot;Middleware Communication&quot; section we get a detailed transactional picture of how the system leverages all MQ PUT and GET transactions on individual queues.

All of this information is made possible by end-to-end transaction trace correlation.

Interested to see more? Watch our demo presentation, recorded live at CA World 2017 in Las Vegas.
</description>
      </item>
      <item>
         <title>Big Data, Big Regulation, and Big Iron</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/big-data-big-regulation-and-big-iron</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/big-data-big-regulation-and-big-iron</guid>
         <pubDate>January 8, 2019</pubDate>
         <description>Let's start with a few statistics. There were over 53,000 security incidents in 2017 with over 2,200 of those identified as confirmed data breaches. Companies experiencing a data breach took an average of 197 days to detect the breach. The cost of non-compliance is 2.71 times the cost of compliance. Not exactly a best-case scenario. So, what does this mean for organizations today? Stuart McIrvine, Director of Product Management at Broadcom, sat down with Dez Blanchfield to discuss just that. Stuart is an industry veteran with a background in hardware, software, and operations management. In his current role, he focuses on helping Broadcom's customers protect data and keep pace with ever-evolving IT and data privacy regulations. Maintaining Customer Trust It all comes down to a simple truth: Your customers will not do business with you if they do not trust you. 65% of consumers lose trust in a breached company, and over 30% of consumers discontinue their relationship with a breached company. Stock prices decline after a breach (though those with a stronger security posture recover over 12 times faster). The cost of acquiring a client is high; the cost of re-acquiring a client is almost immeasurable. Clearly, a company’s success is heavily dependent on their ability to prove themselves as a trusted institution, and that trust goes beyond data privacy. It includes trusting the accuracy of the information you provided, the strength of your security measures, and the expectations you set with respect to the customer journey and experience. Successfully establishing – and maintaining – this trust with your customers requires comprehensive enterprise data protection. In the Era of Big Data and Big Regulation In today’s era of Big Data, Big Regulation, and Zero Trust, enterprises are increasingly focused on establishing and enhancing their security and compliance strategies</description>
      </item>
      <item>
         <title>Solid Future for AutoSys</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/solid-future-for-autosys</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/solid-future-for-autosys</guid>
         <pubDate>January 15, 2019</pubDate>
         <description>V12 shows ongoing commitment to the AutoSys community. I was lucky enough to sit in on a briefing from Dan Shannon, the Product Manager for CA Workload Automation AE (AutoSys). I must say, I have been in the workload automation business for nearly twenty years and, boy, was it fascinating. The amount of investment being made in the product we all know and love as AutoSys must be huge. In a blog I cannot give away everything on the roadmap, but V12 is going to be exciting; below are some of the items that I found most interesting. We started with web services. We know that today most things are based on web-service integrations, and automation needs to be able to seamlessly control that world so we can easily automate more for the business. AutoSys has supported this for a very long time, in both SOAP and RESTful variants. But now you can automatically extract and pass information from call to call, so we can really consume these services quickly and, most importantly, securely – watch out specifically in the roadmap for this section, as I think we will all be using it very soon. But it is not always the new that catches the eye – with automation we want more of the business to be able to consume it, to use it, to gain value from automating the business. Well again, lots of work has been done to improve the efficiency of WCC, not only for our power users – it is great for them – but also so we can have many more casual users that can easily adopt and consume automation through AutoSys. I have worked in software for a long time and there is always the background noise that the vendor is going to invest</description>
      </item>
      <item>
         <title>What to Automate and What Not to Automate</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/what-to-automate-and-what-not-to-automate</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/what-to-automate-and-what-not-to-automate</guid>
         <pubDate>December 6, 2017</pubDate>
         <description>As automation becomes increasingly intelligent, knowing how to effectively apply it can be easier said than done. The benefits of automation are well documented; it increases productivity, cuts cost and minimizes errors. It eliminates repetitive manual tasks, freeing us up to be more innovative. By that logic, attempting to automate everything possible is surely a sensible – even feasible – goal? In a word: no. Consider this your short guide to what to automate and what not to automate. What to Automate As we know, automation is the driving force behind continuous delivery and agile practices. It's helped change our digital landscape and is shaping businesses into the Modern Software Factories the application era requires. Automation's benefits can be applied to just about any department within an enterprise, from HR to Accounts, Dev to Ops – even the mailroom. However, certain processes are more suited to automation than others, and what to automate depends on certain factors. There are telltale signs to watch out for that indicate a process is primed for automation. Medium and high volume Workflows vary dramatically in size. They can range from simple processes composed of a few steps to processes requiring dozens, if not hundreds, of items. When we think about workflows with minimal steps or items, we should ask 'does it make business sense to automate this process?' Conversely, processes with medium- and high-volume items are clearly business-pivotal processes, primed for automation. Manual completion requires three or more users Generally, if a repeatable task involves three or more people, the likelihood is that it would be more efficient if it were automated. There is less chance of a communication breakdown, making it more secure and more accurate. Furthermore, by automating such routines, you'll free up the man hours of at least</description>
      </item>
      <item>
         <title>Why the Time is Now to Modernize Mainframe Development</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/why-the-time-is-now-to-modernize-mainframe-development</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/why-the-time-is-now-to-modernize-mainframe-development</guid>
         <pubDate>October 17, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 1 Earlier this year, my colleague Sreenivasan Rajagopal blogged on &quot;Cloud comes to Mainframe,&quot; highlighting the incredible opportunities for the mainframe if businesses could manage the platform with the same agility as the typical &quot;cloud experience.&quot; This vision resonates incredibly well with my engineering team and our ongoing work to design a DevOps solution for our customers, which happens to be another popular topic that also brings the promise of greater business agility. Our goal is to bring both the cloud experience and DevOps to the mainframe, and to revolutionize mainframe development and operations. I am incredibly excited to share our team's journey, so stay tuned over the coming months as we reveal, piece by piece, the playbook for modernizing development on the mainframe. Voice of the Customer All great design begins with the voice of the customer. Our customers told us they had three key objectives when enabling modernization: Make mainframe development attractive for the new generation of developers: Many organizations are facing a generational shift in their workforce – mainframe experts are retiring, ceding responsibility for mission-essential applications to a new generation of developers. These new developers have limited interest in becoming experts on the mainframe, and are even less inclined to adopt the historical practices established by their predecessors. Insight: Businesses must therefore rethink application development for the mainframe. Make mainframe development a part of the enterprise DevOps initiative: Line-of-business teams that are increasingly adopting DevOps principles are struggling to integrate mainframe development into their existing delivery pipeline, leaving mainframe development as a critical bottleneck. Insight: Businesses must therefore reconfigure their DevOps toolchains to support mainframe applications. 
Make nearly 'zero touch' and 'zero cost' development/test environments on the mainframe: Creating dev/test environments is a complex, time-consuming process requiring dedicated support from IT</description>
      </item>
      <item>
         <title>The Move to a Common Software Maintenance Approach</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-move-to-a-common-software-maintenance-approach</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-move-to-a-common-software-maintenance-approach</guid>
         <pubDate>April 9, 2018</pubDate>
         <description>CA &amp; IBM team up to deliver simple software maintenance

Recently at SHARE Sacramento, CA and IBM announced a groundbreaking partnership to simplify z/OS software maintenance for system administrators, system programmers and security administrators. Under this partnership, CA will give IBM key service capabilities (installation of PTFs) from its Mainframe Software Manager (MSM), to become part of the IBM z/OS Management Facility. This cooperation between CA and IBM provides a foundation that IBM, CA and ISVs, including BMC and Compuware, can use to further improve and simplify the installation of PTFs for z/OS software.
</description>
      </item>
      <item>
         <title>16G Fibre Channel: Bigger and Badder FC For Virtualization, Cloud and Database Applications</title>
         <link>https://www.broadcom.com/blog/16g-fibre-channel-bigger-and-badder-fc-for-virtualization-cloud-and-database-applications</link>
         <guid>https://www.broadcom.com/blog/16g-fibre-channel-bigger-and-badder-fc-for-virtualization-cloud-and-database-applications</guid>
         <pubDate>January 19, 2012</pubDate>
         <description>Most virtualization deployments rely on storage area networks (SANs) for flexible shared storage solutions to meet mobility, performance, scalability and efficiency requirements. As many data centers take the next steps in virtualizing big I/O applications, like databases, and move to more scalable private clouds, storage networking has become the primary bottleneck for Quality of Service (QoS) and scalability. The new Emulex LightPulse 16G Fibre Channel (16GFC) Host Bus Adapters (HBAs) fix that bottleneck, enabling the best QoS for the highest virtual machine (VM) density with the fewest ports and cables and the lowest power footprint. Additionally, the entire SAN fabric benefits from higher availability and reduced power requirements by leveraging a faster HBA. With better performance as well as streamlined management and backward compatibility, Emulex 16GFC HBAs are the best solution for virtualized environments. Here is what you can expect when upgrading to Emulex 16GFC HBAs: 5x the IOPS Twice the data throughput Cuts application I/O response time in half Up to 4x the IOPS for typical 4K/8K I/O block database applications 3x the IOPS performance per watt Maximum VM density with increased N_Port ID Virtualization (NPIV) virtual ports (vPorts) True cloud scalability, with support for up to 255 virtual functions, 1024 MSI-X and 8192 logins and open exchanges for maximum VM density—up to 4x more than other 16GFC adapters Unmatched native manageability with Emulex OneCommand Manager for VMware vCenter – enables adapter management directly from the vCenter console, delivering 2x the adapter management functionality and taking half the time to install and manage compared to other adapters End-to-end data integrity with BlockGuard™ hardware offload – supports the T10 Protection Information (T10-PI) standard to protect against silent data corruption, without the 30-40% performance tax incurred by other firmware-based T10-PI solutions If you’d like to learn more about 16GFC technology, join our</description>
      </item>
      <item>
         <title>Everything You Always Wanted to Know About Hadoop Automation (But Were Afraid to Ask)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/everything-you-always-wanted-to-know-about-hadoop-automation-but-were-afraid-to-ask</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/everything-you-always-wanted-to-know-about-hadoop-automation-but-were-afraid-to-ask</guid>
         <pubDate>April 8, 2019</pubDate>
         <description>I’m sure that every day you hear the same old story: Big Data could be a game-changer for your organization. A unique opportunity to harness massive volumes of structured and unstructured data, make faster decisions and deliver highly personalized customer experiences on a massive scale. Right? And now you’re probably faced with that yellow elephant in the room, at the center of developers’ conversations, infiltrating your meetings, charming your boss and hiding between your budget lines – this is Hadoop and, obviously, you cannot avoid it. So what is Hadoop exactly? This is probably the question you don’t even want to ask your geek friend, as you’re sure the answer would either generate even more questions or bring on a light migraine. So how do you define Hadoop in simple, non-geeky words? Well, let’s say it is a solution to common database problems you’ve been facing more and more frequently, such as data that cannot fit into your tablespaces, SQL statements that take ages to complete or a database schema that’s changing all the time. In fact, Hadoop is an open source framework designed to address the three Vs, the three main challenges of Big Data, known as Volume, Velocity and Variety. Hadoop starts to make sense when traditional relational databases struggle to scale. How does Hadoop work? The principle of operation of Hadoop is pretty simple. The infrastructure applies the well-known principle of grid computing, which means dividing data storage and process execution across multiple nodes or clusters of servers. Imagine you have a file far larger than your server's capacity. You cannot store that file, right? But Hadoop lets you store files bigger than what fits on a single server by splitting data into chunks that are distributed across multiple nodes. So you can store</description>
      </item>
      <item>
         <title>Self driving continuous delivery?!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/self-driving-continuous-delivery-ca-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/self-driving-continuous-delivery-ca-automation</guid>
         <pubDate>August 18, 2019</pubDate>
         <description>Your journey to continuous delivery is a lot more closely related to autonomous driving than you may think. In fact, just like continuous delivery, self-driving cars are all about levels of automation. And intelligence, you may ask? Intelligence is a level of automation in and of itself! Self-driving cars are all the rage these days, but they are not really here yet. If you dig a little deeper into it, you’ll quickly find out that the NHTSA (yes, that exists; it’s the National Highway Traffic Safety Administration in the USA) has adopted what it calls the 5 levels of automation (6 levels, actually, if you count level zero, “no automation,” as a level) – anyway, you can read about it on the NHTSA website here. The 5 Levels of Automation in Cars. The interesting thing to notice about the NHTSA stairway is that it doesn’t really mention artificial intelligence or deep learning at all; instead, it makes its distinctions based on the “level of automation” that a vehicle is capable of. So, for example, level 2 is defined as “partial automation,” where “the vehicle has combined automated functions … but the driver must remain engaged with the driving task and monitor the environment at all times.” At level 3 the driver is still responsible for the driving and must remain alert, although the vehicle practically handles all the driving (some modern cars, like Teslas and others, are considered to be almost level 3). At level 4 the car can drive autonomously under certain conditions (e.g., highways, specific weather conditions, etc.), and at level 5 the driver is completely optional or, indeed, not even given an option to control the vehicle. Where is the intelligence? When you begin to further explore these increasing</description>
      </item>
      <item>
         <title>Announcing the release of CA Automic Applications Manager 9.3.0</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/announcing-the-release-of-ca-automic-applications-manager-9-3-0</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/announcing-the-release-of-ca-automic-applications-manager-9-3-0</guid>
         <pubDate>April 4, 2019</pubDate>
         <description>What's new in Applications Manager Having spent half of my automation career working with this product, it gives me great pleasure to announce that the 9.3.0 release was made available for download on 21st March 2019. You can download this great new version at the Automic Download site, along with all the associated documentation. Many of you contribute to the success of this product not just by using it but also by suggesting ways to improve the solution. Please continue to post and vote for ideas within the community site, as we want to drive investment in the product exactly where you need it. At the end of this blog is a link to the community site for CA Automic Applications Manager. So, why should you upgrade to this latest release? We want to give you the flexibility that you need when running our solutions, so we have added full support for the OpenJDK version of Java. As Oracle moves to commercial versions of Java, we wanted to give clients the option to adopt whichever Java variant they need for their business. Similarly, customers want to embrace the latest technologies from Oracle, so we have added full support for Oracle 18c in this release, eliminating any barriers or challenges to adoption as clients advance their use of Oracle technologies. Beyond technology support, we are always looking to simplify using our solutions. The licensing keys within CA Automic Applications Manager were a challenge for some customers, so we have removed them; from this version forward you will never have issues with deploying keys or with expired keys across your system. For those of you who have faced this in the past, I am sure you will be celebrating. Many areas of the product were enhanced, either through fixing known problems or by</description>
      </item>
      <item>
         <title>Announcing Advanced API Deployment, Improved User Experience and Powerful Analytics for Layer 7 API Management - Layer 7® API Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/announcing-advanced-api-deployment-improved-user-experience-and-powerful-analytics-for-layer-7-api-management-layer-7-api-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/announcing-advanced-api-deployment-improved-user-experience-and-powerful-analytics-for-layer-7-api-management-layer-7-api-management</guid>
         <pubDate>December 18, 2017</pubDate>
         <description>Now Available: Layer 7 API Management SaaS and Layer 7 API Developer Portal Release 4.2 We are very proud to announce the latest improvements to Layer 7® API Management SaaS and the Layer 7® API Developer Portal in version 4.2, now generally available. We continue to build upon our vision of providing a technology platform for API and architecture teams to effectively implement a full API lifecycle strategy and realize the most value and potential from their API programs. Now when you subscribe to or buy Layer 7 API Management, you can look forward to the following improvements: Automated and Federated API Deployment Across Environments Easily deploy APIs across environments, such as development, testing and production, or tailor your API products by region and deploy to different geographies. You can control how an API is deployed: automatically upon publishing, on demand where API deployments are triggered by our deployment API, or via a workflow that uses a scripted approach to integrate with your CI/CD pipeline. Intuitive and Easy-to-Use Experiences for Administrators, Publishers and Developers Overall improvements: Streamlined installation for private cloud environments using Docker Easily customize the look and feel of your API management interface with themes, optionally for each developer organization you create Simplify user login to CA API Management with Single Sign-On using a variety of enterprise identity providers like LDAP or CA Single Sign-On, or use built-in admin or developer account registration Customize Appearance for Each Development Organization Customization Options API publishing experience: Importing your API Swagger definition file makes it fast and easy to publish a new API with a few clicks API owners or developers can easily apply powerful and customizable policy templates upon publishing APIs to enforce SLAs Easily target how your API is deployed publicly or privately and to select</description>
      </item>
      <item>
         <title>Major League Application Delivery</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/major-league-application-delivery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/major-league-application-delivery</guid>
         <pubDate>November 5, 2018</pubDate>
         <description>What does baseball scouting have to do with enterprise complexity? In 2002, the Oakland Athletics Major League Baseball team permanently changed the way baseball scouting and analysis are done. Implementing what has come to be known as the Moneyball strategy, the team reached the playoffs for consecutive years on a relatively meager budget. Taking a highly metrics-based approach and applying analytics in a way that enabled the team to sign players overlooked by the league’s bigger teams, Oakland strengthened their team by finding and addressing weaknesses that other teams simply couldn’t see, with players that were often considered worthless. Their success has seen other baseball teams adopt the approach, while in Europe, soccer teams have begun to adapt similar strategies for their sport. In business, like in sports, every organization has its own challenges, strengths and weaknesses. The larger the enterprise, the more idiosyncrasies will be encountered, adding to the complexity of finding solutions. Yet any challenge encountered often stems from the same root cause: a lack of visibility and analytics. In IT, this means not knowing where time is spent delivering software changes, what is causing delays and what value is delivered at the end of the day. And since this lack of clarity could be anywhere in the business, so indeed could the bottlenecks that are holding up a release. The waters might be muddied further by inconsistent processes between stages of a release pipeline. This could be due to any of the following factors (or a combination of them): technical dependencies between isolated silos, budget constraints and/or technical limitations that lead to gaps and malfunctions in the processes. Consequently, some parts of the application will not pass through the pipeline, and it is not always clear where or why something has been lost. With a lack of</description>
      </item>
      <item>
         <title>A Bird's Eye View For Your Network Monitoring Application</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/a-bird-s-eye-view-for-your-network-monitoring-application</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/a-bird-s-eye-view-for-your-network-monitoring-application</guid>
         <pubDate>March 5, 2018</pubDate>
         <description>Did you know? CA Spectrum has a native global view for site status health. While NetOps needs a network monitoring application to get right to the heart of an outage, triaging down to a device or a port on that device, it is always a good idea to have a 10,000 ft view of operations as well. CA Spectrum has a native global view of operations called the Geographical Information System (GIS) view. The typical use for a network monitoring application's GIS view is as a display on the Network Operations Center (NOC) wall, giving a quick at-a-glance status when staff are coming on/off shifts and for NOC managers and visitors. The following is a CA Spectrum network monitoring software example that I mocked up: Out of the box, the GIS view only works with devices that are enabled for SNMP monitoring, and it leverages the Location field with an attribute ID of 0x23000d to obtain the GIS address information. While showing single devices is useful if you just want to understand the WAN status, I have found most customers want to represent the status of sites or &quot;site services&quot; via this GIS map. This can be achieved with the configuration settings covered below in the section titled Display of device status on the GIS map with OOTB functionality. I believe the best use of the GIS map is to display site services. What I mean by a site service is a business service that is dynamically managed by CA Spectrum Service Manager for each physical site (identified by &quot;Site&quot; Global Collections using the TopologyModelNameString attribute). The reason site services are the best item to display is that a site service can contain all the devices at the site, and you can control which alarms affect the service using service policies to manage the service</description>
      </item>
      <item>
         <title>Data-Centric Security: Stop Treating the Mainframe Separately</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/data-centric-security-stop-treating-mainframe-separately</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/data-centric-security-stop-treating-mainframe-separately</guid>
         <pubDate>May 16, 2017</pubDate>
         <description>CA Technologies unveils the enterprise capabilities in CA Data Content Discovery Historically, mainframe security has been managed separately from other platforms in a modern data center. But in today’s application economy, that is not viable. The mainframe is now a connected entity in the business and is critical both to traditional high-volume transactional services and to newer services and mobile apps. Hence, let’s stop treating the mainframe differently – security professionals, both mainframe and distributed, speak the same language and share the same concerns; it’s just that the policies and processes have been distinct. It’s vital now more than ever to view data-centric security holistically, rather than by platform. If you take a fragmented approach to security or, worse, treat the mainframe as a black box or fortress, you introduce vulnerabilities into your enterprise. Data is always on the move, so ultimately your overall security is no greater than your weakest security policy. Security challenges and complexity The holy grail of every organization is the security of corporate data: the protection of your most sensitive business assets. Yet the security challenges surrounding corporate data are manifold: While 80% of all transactional data worldwide still resides on mainframes and 90% of all credit card transactions pass through this modern IT platform, there is a disturbing inability to actually locate all of that sensitive and regulated data. What you cannot locate, you cannot protect. The mainframe brings significant business value, but it also brings new threats. This level of risk is further increased when organizations are unsure whether the mainframe is being managed according to set policy. There is a decided skills gap with regard to mainframe expertise that affects the ability of companies to ensure the security of their data. Because the connected mainframe touches many types of applications,</description>
      </item>
      <item>
         <title>Bring CA Operational Intelligence Into Your Citrix Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/bring-ca-operational-intelligence-into-your-citrix-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/bring-ca-operational-intelligence-into-your-citrix-monitoring</guid>
         <pubDate>April 15, 2019</pubDate>
         <description>Citrix Virtual Apps and Desktops (formerly XenApp and XenDesktop) is a popular solution that helps enterprises deliver virtual applications and desktops remotely and enables end users to access them from anywhere on any device. With its FlexCast Management Architecture (FMA), it provides a comprehensive platform for application and desktop delivery, mobility, services, flexible provisioning and cloud management. While the Citrix multi-tier architecture provides greater flexibility and interoperability between different Citrix services, it also brings a distinctive set of challenges to enterprises that leverage this technology. Typical Questions That a Citrix Administrator Often Asks: How is my end-user experience being affected? Can end users log in quickly? How can they access their profiles quickly? How can I minimize latency in accessing the virtual desktop infrastructure? What is the traffic volume on my VPN gateway? How is it affecting round-trip response time? What is the health of the core infrastructure? Solving these challenges requires monitoring the various components involved in Citrix Virtual Apps and Desktops deployments. CA Unified Infrastructure Management (CA UIM) provides monitoring capabilities for all the key components using a set of monitoring probes. CA UIM, in conjunction with CA Operational Intelligence, enables IT operations teams to proactively foresee and resolve potential performance bottlenecks that might arise across different layers within Citrix deployments. CA UIM Monitoring for Key Citrix Infrastructure Components Below are eight infrastructure components found in a typical Citrix Virtual Delivery Agent (VDA) deployment. Each section details how CA UIM monitors these components through various probes. 
Site &amp; Delivery Controllers: Controllers are one of the core components in the Citrix VDA deployment; they communicate with the underlying provisioning layer to distribute applications and desktops, authenticate and manage user access, broker connections between users and their desktops and applications, optimize user connections, and load-balance these</description>
      </item>
      <item>
         <title>Oracle PeopleSoft Makes the Grade</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/oracle-peoplesoft-makes-the-grade</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/oracle-peoplesoft-makes-the-grade</guid>
         <pubDate>September 11, 2018</pubDate>
         <description>Workload automation brings efficiency to the University of Colorado Running a large university system is complicated – from managing human resources, to maintaining university property, to a variety of other day-to-day processes – and even more so when connecting and coordinating these services across multiple campuses. Financial services at many large universities not only manage payroll for hundreds of faculty and staff, but also collect tuition payments from and distribute financial aid to thousands of students – or even tens of thousands of students, in the case of some of the largest universities in the United States. For institutions like these, using the right workload automation solution can boost efficiency, save time and resources and make it easier to excel. Top of the Class At the University of Colorado, the crucial business processes underlying HR, payroll and admissions across four campuses are powered by Oracle PeopleSoft, but the length of time required to process all of the data for these multi-campus systems has proved challenging. Some manual processes required two of the university's employees to spend two full workdays on them each week. After implementing an Oracle-validated integration for PeopleSoft that runs on CA Automic Workload Automation, the time and resource savings have added up quickly. This solution automates 800,000 processes each month for the university, has helped them cut over 20,000 hours of manual activity and has removed 860 hours of delays each month from financial aid processing. Eliminating the need to manually submit jobs and processes into different process schedulers has also freed up staff to do the important, innovative work in their job descriptions. The members of the university who work with these systems enjoy the user-friendly interface that the CA solution provides, and the ease of teaching new team members how to use it. In addition to these benefits, CA Automic for Oracle</description>
      </item>
      <item>
         <title>Self-Service: How Automation Helps You Help Yourself</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/self-service-how-automation-helps-you-help-yourself</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/self-service-how-automation-helps-you-help-yourself</guid>
         <pubDate>December 4, 2017</pubDate>
         <description>Traditional self-service approaches have been at odds with our expectations of the digital age. Automation resolves these issues. For a long time, there has been a disconnect between our personal and professional lives. Day-to-day, we demand efficiency and fluidity. If a process is cumbersome or takes too much time, we abandon it in favor of a more streamlined solution. Time, after all, is our most valuable commodity and we’re certainly not going to waste it. Yet, when it comes to our professional lives, it appears we have time to burn – we most certainly don’t. Historically, self-service has been a stumbling block for many organizations. This shortcoming is even more pronounced as we move further into the digital age. At home, if I were to request a new application on my smartphone, I’d expect it to download instantly. That application should be available in seconds or, at worst, minutes. As users, we’re not prepared to wait days and weeks for our software updates or stability patches. That said, there’s a big difference between downloading a smartphone application and fulfilling the self-service requests of the modern enterprise. Our home computing is, to a point, limited. Although we have myriad connected devices (smartphones, tablets, games consoles, watches and so on), typically they utilize just one or two operating systems. Android, iOS and Windows are among the most prevalent. Developers are optimizing for very specific operating platforms, which have little if any variance in system configuration. When we look at the needs of the modern enterprise, however, we see a much more varied digital topography. Self-Service Hurdles Businesses of all sizes operate under highly heterogeneous environments. Different staff members require different systems, applications and permissions in order to complete their work. It’s a complex environment, with companies often turning to public cloud providers</description>
      </item>
      <item>
         <title>Planning Agile Part 2: Big Room Planning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/planning-agile-part-2-big-room-planning-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/planning-agile-part-2-big-room-planning-rally-software</guid>
         <pubDate>January 31, 2018</pubDate>
         <description>In this blog, I will dive deep into the week leading up to Big Room Planning and the event itself. Namely, I'll focus on how we set up, what our schedule looks like, what we plan to get out of it, and how we set ourselves up for success. This article is written from the perspective of our engineering department. However, we also hold Big Room Planning events for marketing, sales, leadership, services, and other groups at Rally. So if you're not in engineering, this article is still for you! What is Big Room Planning? Big Room Planning (BRP), sometimes referred to as PI Planning, is a two-day department (or release train) planning event. The purpose of the event is a collaborative planning process resulting in a committed roadmap for the Planning Increment (PI). At Agile Central, our PIs are fiscal quarters, organized into six or seven two-week sprints, with all teams on the same sprint cadence. The final sprint of the quarter is really two one-week sprints: one for Hackathon, and one for the Big Room Planning event. Before the Event While Big Room Planning itself is only a two-day event, teams spend much of the full week on planning-related activities. For the two days before Big Room Planning, always Monday and Tuesday for us, teams work to understand the priority and breakdown of features as they relate to our PI goals. Our teams will generally block out three hours each day to: Establish their capacity using historical velocity and team judgement Have Product/Architecture/Initiative teams provide a compelling, prioritized list of features with specific defined outcomes Work, to the best possible extent, on initial story breakdown and any necessary spikes for these features. Alternatively, create spikes scheduled for early in the quarter. Day 1 We open</description>
      </item>
      <item>
         <title>Planning Agile Part I: The Calendar</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/planning-agile-part-i-the-calendar-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/planning-agile-part-i-the-calendar-rally-software</guid>
         <pubDate>January 9, 2018</pubDate>
         <description>As a Scrum Master, I get a lot of questions from our customers and other Agilists about how the Rally team does planning. What is our planning process? How do you run Big Room Planning (BRP)? How do you prepare for BRP to go smoothly? What makes a BRP event successful? In this two-part series, I will tackle our planning schedule and the work we do leading up to a BRP event to make it successful. Part two will dive deeper into the BRP event itself, what those two days look like, and some of our learnings on how to have the best event possible. How We Operate Our team practices SAFe. As part of SAFe, we plan our work in planning increments (PIs) of one quarter at a time and work in two-week sprints. In addition, we plan as one release train made up of 15 teams across three locations. Work is organized based on our highest priority initiatives to tackle in each PI. We define an initiative as a mid-range (1-6 months) business objective (including architecture or experiments) that delivers value to customers. Initiatives are budget-constrained, not fixed-scope, so that we are flexible on how we plan and deliver on the objective. Each initiative has two to four teams coordinating on delivering that body of work. They do this by defining and executing on related features and stories. For more on how we organize and run our development teams, check out this post. The Process Since we work in two-week sprints, it is easiest to break down our planning process into the same time boxes. This helps us keep all of our work and our brains in sync. Sprint 1 Our planning calendar starts during the first sprint of the quarter while our teams are off</description>
      </item>
      <item>
         <title>What is DevOps Culture?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/what-is-devops-culture</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/what-is-devops-culture</guid>
         <pubDate>May 13, 2018</pubDate>
         <description>Everyone everywhere is 'doing DevOps' — but what does that actually mean? DevOps is a concept that's been kicking around for a decade. It is one of those things that everyone talks about but not everyone properly understands or implements. We might hear how it is inherently linked with agile software development; we're told it will dramatically enhance a company's ability to release software quickly and efficiently. We know from its name that it involves some sort of integration between development and operations, and a cursory bit of research shows that it is a culture as opposed to a tool or methodology. So why, in the age of information, is DevOps so often implemented incorrectly? Why are its fundamental tenets so often misunderstood? When broken down, what is DevOps culture and what should it look like? Some Misconceptions of DevOps Culture The concept of DevOps has become unclear as companies misinterpret and merge ideas, and when done incorrectly, DevOps can create resentment, divisions and disharmony: the antitheses of what it is supposed to promote. It is important to strip away a few misconceptions before drilling into what precisely a DevOps culture entails, so let's start with what DevOps isn't: It is not asking a developer to wear the hats of a systems administrator, QA engineer and release manager simultaneously. It is not asking an operations manager to suddenly start heavy-duty coding. It's not a particular toolkit or software product, nor is it about change processes. Instead, DevOps is simply about creating a holistic mindset and collaborative work ethic across all of the departments involved in the software delivery lifecycle. IT Culture Before DevOps Before the advent of DevOps and its rapid adoption in mainstream computing, there were three distinct teams: development, testing and operations. Each had their own interests, goals and</description>
      </item>
      <item>
         <title>Security Operations Center 101: The Mainframe Edition</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/security-operations-center-101-mainframe-edition</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/security-operations-center-101-mainframe-edition</guid>
         <pubDate>March 11, 2019</pubDate>
         <description>What Even Is a SOC? The security operations center (SOC) is the central command center for security data and systems across the enterprise, from cloud to distributed to mainframe. Think Battlestar Galactica's Command Central. 🛸 The SOC's role in enterprise security software is to: Proactively prevent security incidents. Reduce dwell time and breach impact by quickly detecting and reacting to incidents. Analyze and investigate incidents to identify the source and impact. Help to remediate security incidents as quickly as possible. Report on security incidents for auditing purposes and keep pace with compliance management. Quickly share an enterprise's security posture with key stakeholders. This requires a concerted effort around people, data, and processes within an organization. Why Does It Matter? There is a plethora of security-related data in any one enterprise. It is no wonder the average discovery time for a breach is 197 days. The SOC is fundamental to optimizing and speeding threat detection and remediation. This helps to ensure a trusted customer experience, retain customers, meet regulatory requirements, and prevent expensive data breaches. The main goal: Don't let an event become an incident. How Do You Build It? Is your SOC mature? Are you trying to further develop your SOC? Let's discuss a few key points around SOC strategy: Define business goals and risk preferences that inform the SOC strategy. This will help determine both your foci for data consumption and analysis and the framework for your remediation or incident response plans. Map your SOC infrastructure to regulatory requirements. Compliance is now part and parcel of information security efforts. Build data flows to include all relevant data in the SOC and maintain a clear line of sight across the enterprise. This typically includes network and endpoint monitoring, breach detection solutions, and security information and event management (SIEM)</description>
      </item>
      <item>
         <title>Defining DevOps Maturity</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/defining-devops-maturity</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/defining-devops-maturity</guid>
         <pubDate>November 30, 2017</pubDate>
         <description>For more than a decade, enterprises have been bringing their development and operations teams together. How well are you faring? L.P. Hartley famously quipped, 'The past is a foreign country: they do things differently there' - words which resound more than ever as we move further into the digital age. For over ten years, DevOps has been taking the world by storm. It has made organizations step back, evaluate their processes and implement enterprise-wide cultural and infrastructural change. Vendors have followed suit, introducing an abundance of continuous delivery tools to facilitate this step-change for companies at different levels of DevOps maturity. Agility and digital transformation are now the name of the game. DevOps seeks to enable both. It’s fueling innovation, allowing companies to do more with less and enabling enterprises to become built-for-change businesses; traits required of the application economy. As the digital era has bloomed, we have seen organizations begin their digital transformation initiatives – with DevOps at the heart of those journeys. As such, its practices have taken root within organizations of all sizes. However, even purpose-built-for-change businesses – industry disruptors and unicorns – are still maturing. It makes sense, then, that we are all at varying stages of our digital transformations. DevOps Maturity: What Is It? Becoming a built-for-change business doesn’t just happen overnight. Organizations have realized that digital transformation programs aren’t quick-win solutions. They take time to properly implement. As such, businesses and enterprises of all sizes are today at varying levels of ‘completion’ on their journeys and are keen to discover how well they are doing DevOps. Successful implementation is often referred to as DevOps maturity – but how is it measured? DevOps maturity can be measured in four distinct areas: culture and strategy; automation; structure and processes; and collaboration and sharing. Culture and Strategy</description>
      </item>
      <item>
         <title>The Evolution of Self-Driving IT Ops</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-evolution-of-self-driving-it-ops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-evolution-of-self-driving-it-ops</guid>
         <pubDate>July 10, 2018</pubDate>
         <description>A Practical Look at Machine Learning, Augmented Intelligence, and Automation Self-driving cars are making headlines every day; the future being envisioned is a car that runs itself, maintains itself, sends alerts when help is needed, and prevents accidents. While opinions of a self-driving car vary from excitement about simplifying the daily commute to &quot;no way would I ever put total control in the hands of a machine,&quot; the concept gives rise to thoughts about self-driving data centers. What would they look like and how would they change IT as we know it? Reports indicate that enterprises are losing $21.8 million per year on average in downtime, and 87 percent expect this to increase [1]. For organizations that are trying to manage and optimize increasingly complex hybrid IT environments that span mainframe and multi-cloud infrastructures, could evolving to a self-driven data center provide the keys to driving smarter, faster IT operations and preventing downtime? Augmented Rather than Artificial Intelligence For some, the thought of a self-driven data center conjures up scenes from classic sci-fi flicks like War Games, Terminator, and Tron. But, as Erik Brynjolfsson shares in his TED Talk, the future shouldn’t be about machines and humans competing against each other, but about how they can work together to achieve business objectives through intelligent automation – they are better together. Self-driving capabilities in IT operations are very real, and are progressing rapidly due to increasing levels of automation coupled with advancements in machine learning and augmented intelligence (AI). It’s important to note that self-driven IT Ops is more about augmenting the operator through data-driven intelligence and automation than about completely replacing them. AI in this article should be understood as ‘augmented’ intelligence rather than ‘artificial’ intelligence. 
Stages of Automation To explore the concept of self-driven IT Ops further, we will look</description>
      </item>
      <item>
         <title>Empower Ops into DevOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/empower-ops-into-devops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/empower-ops-into-devops</guid>
         <pubDate>July 8, 2018</pubDate>
         <description>Consistent automation prevents shadow ops from creeping into your IT organization Continuous innovation has become a business requirement. Against this backdrop, application release cycles are measured in days, rather than weeks or months. To keep up, many companies have turned to agile methodologies and DevOps. However, these approaches are hard to reconcile for traditional IT operations, whose DNA is all about stability rather than agility. When IT operations cannot adapt, they become a victim of “shadow ops.” Development teams take on the roles of app deployment and management themselves, typically using a large number of disconnected tools that require more training and maintenance. This inefficiency is part of the huge opportunity cost of shadow ops: if developers are focused on maintaining application delivery and stability, they’re not writing code, and the significant expertise that IT operations could bring to the table goes to waste. The bottom line is that software gets delivered faster when IT operations teams are tightly involved in the development process. This is the beauty of DevOps. Keeping pace with continuous delivery, without letting shadow ops creep into your organization, requires development and operations to implement consistent end-to-end automation across the entire delivery process. However, it is important to treat DevOps as a bottom-up approach, as well as top-down. For a DevOps approach to succeed in practice, IT operations should adopt an “OpsDev” mentality, providing infrastructure and automation services on demand to the development teams, across all steps of the continuous delivery process, from build through testing to production. 
Despite this, traditional automation solutions tend to separate the needs for continuous delivery into distinct categories and tools: automating releases, automating infrastructures and automating application processes—all of which are owned and managed by different teams. And this explains why we frequently see an application successfully tested</description>
      </item>
      <item>
         <title>Microservices: Where Anything is Possible</title>
         <link>https://www.broadcom.com/sw-tech-blogs/api/microservices-where-anything-is-possible</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/api/microservices-where-anything-is-possible</guid>
         <pubDate>September 24, 2017</pubDate>
         <description>How awesome was 1999? We were pre-tech bubble. We were pre-pendulum swinging too far towards governance and compliance and corrupting the tooling that was advancing us. We were pre-closing the doors to innovation in the name of safety. And, the best part, The Matrix was playing in theaters. I promise, I don’t date myself with that movie reference for no reason. I bring up The Matrix because at the end of the movie, Neo (the main character for those still in grade school when it came out) described a world without constraints. “… [I’m going to show you] a world without rules and controls, without borders or boundaries. A world where anything is possible.” This is the tech world today. The promise of The Matrix is being realized by the microservices movement. At this point, the analogy is probably not lost on anyone that, in the purest sense, microservices are built without borders or boundaries. Because of that, microservices are fundamentally (and literally) changing how we view the world around us. Today, the future is being built with more veracity and velocity than ever and the impossible becomes possible if we just say “yes, if”. And, that’s the hard part. “Yes, if” we change our approach to technology. “Yes, if” we challenge what we view as important attributes of an architecture. “Yes, if” we can change our mindset. Microservices: the who I’m not an authority on microservices; but, I do have the very fortunate benefit of working with the smartest, most innovative and creative people in the world whose common goal is to do amazing things for their brands with technology. They have shaped my perspective and fundamentally changed my view of what IT is all about. Who are they? They are from every industry, of every size and they</description>
      </item>
      <item>
         <title>Designing an Agile Approach for On-Boarding New IT Infrastructure Technologies</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/designing-an-agile-approach-for-on-boarding-new-it-infrastructure-technologies</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/designing-an-agile-approach-for-on-boarding-new-it-infrastructure-technologies</guid>
         <pubDate>January 10, 2018</pubDate>
         <description>Welcome to the first in a series about a methodology designed to increase efficiency, optimize productivity and truly take IT Operations to the next stage within the Modern Software Factory. Humbly presented for your consumption will be a set of simple, common-sense-based &quot;Best Recommendations&quot; within a framework leveraging CA Unified Infrastructure Management (CA UIM) at the core, but also exploring when CA brings to bear its full power with the entire Agile Operations stack. Utilizing the key steps and strategy points laid out through the series positions both enterprise and service provider organizations to deliver repeatable procedures and &quot;build once, then clone and customize&quot; style capabilities, and will dovetail at many stages along your maturity progression in fully realizing your Modern Software Factory's potential. The 5 stages are, succinctly: Strategize, Execute, Deliver, Grow and Enhance (SEDGE). As with mastering any advanced concept, the initial steps become formalities once groups understand which pieces are more valuable to their end customers, and production can then scale dramatically in very short order. Broken down, each stage represents areas where we will explore success stories that I’ve been lucky enough to share in, and I’ll give you my takes on why these cases were so successful. Taking into account my more than decade-long life cycle growing up in this new agile world, here’s a sampling of what I’ll be sharing around each of these: 1 – Strategize: Effective planning for success while maximizing “knowing what you know” but, more importantly, asking the right questions to find out what you don’t. We’ll delve into the power of putting together the right team, how to plan your architecture, and how to decide which of several paths to success is ultimately the best fit for your goals. Failing to plan inevitably leads to failures, and</description>
      </item>
      <item>
         <title>Mainframe - The Cloud Services</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-the-cloud-services</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-the-cloud-services</guid>
         <pubDate>April 17, 2017</pubDate>
         <description>By Raj Sreenivasan, Senior Director of Product Strategy, CA Technologies Remember this parable? Two cats were fighting over a cake. A passing monkey ate the cake while the cats argued. So both cats ended up with nothing. I've remembered this story when I've witnessed cloud and on-prem IT teams argue that their approach is best. Your business is too valuable and too unique for a binary, winner-takes-all approach. Now there's a third way that offers the best of both worlds for your business: Cloud Services. This approach is especially appealing to businesses that only use their mainframe for selective transactions and processing. Cloud Services: what are they? In a nutshell, Cloud Services combine the reliability, availability, scalability and high-performance computing of an on-premise mainframe with cloud-like &quot;on-demand&quot; availability. Elements of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) are presented as a single integrated service. And they're available for any business that wants to design, code and deliver applications on a mainframe platform. How do Cloud Services help you? Cloud Services deliver value because you'll no longer need to manage the IT hardware environment and tasks that are usually handled by mainframe systems programmers. As a result, your overhead and skills risk are reduced significantly. But you can still get what you need to create and maintain a mainframe environment on which you can run your applications. Cloud Services in practice Just as important, you'll retain control of your most valuable mainframe assets, such as source code, databases, data sets and batch schedules. You'll still manage and maintain processes like provisioning execution environments (both test and production) for application and database serving. 
And you'll still get to connect these applications to their databases, develop code, and create safe copies of production data for application testing and release management. Transitioning to Cloud</description>
      </item>
      <item>
         <title>Blockchain: Let's Talk Business Case, Instead of Use Case</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/blockchain-let-s-talk-business-case-instead-of-use-case</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/blockchain-let-s-talk-business-case-instead-of-use-case</guid>
         <pubDate>February 28, 2018</pubDate>
         <description>When considering blockchain technology, it can be easy to get lost in the technical discussions. As should be the case with any new technology, particularly one like blockchain that is easily among the most hyped in recent memory, you should focus on the real business cases. It is important to evaluate whether embracing blockchain is right for your business: will it live up to the hype, and more importantly, is there a real return on investment? One of the simpler and more digestible discussions on blockchain for the enterprise is by IBM. The audio presentation discusses how blockchain has the potential to transform hundreds of long-tail processes across all industry verticals. So, what is a long-tail process? Consider the example of refinancing your home, something I went through recently. The mortgage lender presented me with a big stack of paper with documents from the bank, the mortgage company, the insurance company and the state. It was a long, complex process! There are mortgage reinsurance, multiple parties, and a lengthy timeline, and the transaction needs to be completed accurately and on a specified schedule. While mortgage companies do a great job of advertising how easy it is to get lower rates, the process was anything but! Getting to closing took almost two months! Just like mortgage companies and banks, many businesses and their suppliers/partners own multiple assets and multiple pieces of information, as well as rules and policies on risk parameters, that must all come together when conducting business transactions. When handled manually, all this adds to the time and the cost of processes. Regulations also slow down the process: no one wants to get sued and everyone wants to get paid! With blockchain, long-tail processes can occur automatically. And the technology further facilitates the process by ensuring the</description>
      </item>
      <item>
         <title>A Paradigm Shift for Modern Mainframe DevOps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/a-paradigm-shift-for-modern-mainframe-devops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/a-paradigm-shift-for-modern-mainframe-devops</guid>
         <pubDate>June 12, 2018</pubDate>
         <description>I am a firm believer that modern application developers are more than capable of becoming experts in any programming language. Whether you look at an exciting, modern language like Google Go, or a tried and tested language such as COBOL, programming is simply a concept. &quot;Programming is actually a set of patterns that helps us instruct a machine effectively to solve a real-world problem&quot; (Chatterjee, 2017). My own experience supports this idea, as I have programmed with languages such as IBM HL-ASM, C, C++, Java, JavaScript, Python, and more while working on mission-critical software in the mainframe industry. Why do I point this out? The reason is simple. When we talk about the challenges surrounding mainframe software development, the problem is not the programming languages or a lack of programming skills, but rather a lack of tools that interact with other computing platforms such as cloud, Linux, and Windows. The lack of tools means that we are unable to leverage the skillsets that modern software developers already possess. We hear about the shortage of mainframe-centric development skills on what seems like a daily basis, so perhaps it is time to examine a new approach to the problem. Instead of trying to bring new developers to the mainframe platform, let's change the paradigm of mainframe development and bring the mainframe to today's developers! Current Challenges to Mainframe DevOps The challenge of extending mainframe DevOps to reach the levels of automated delivery achieved on other platforms, such as cloud, is well documented. Here is an excellent article by Dave Nicolette about The State of Mainframe Continuous Delivery. The mainframe industry has fallen into a trap: we continue to develop mainframe-centric solutions for DevOps challenges. Vendors create solutions such as continuous integration (CI) or continuous delivery (CD) orchestrators, testing frameworks, IDEs, automated</description>
      </item>
      <item>
         <title>Five Benefits of Network Monitoring Software You Can’t Deny</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/five-benefits-of-network-monitoring-software-you-can-t-deny</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/five-benefits-of-network-monitoring-software-you-can-t-deny</guid>
         <pubDate>March 5, 2019</pubDate>
         <description>What's happening on your network? How does network performance impact your end-user experience? Where is the performance bottleneck? What will happen if your users start using bandwidth-intensive applications? If you do not have answers to these questions, it's high time you enhanced your network monitoring software capabilities. With applications becoming more complex, the demand for better availability and improved performance continues to grow. This makes network monitoring software one of the most important investments for modern data centers. Modern network monitoring keeps an eye on devices, traffic, and servers in real time and notifies network operations when performance begins to deteriorate. This rapid relay of information helps NetOps readily identify areas of concern and mitigate risks associated with unexpected events. Though network monitoring software provides myriad benefits for your business, in this blog post we will discuss five benefits that you can derive from your CA/Broadcom network monitoring tools: 1. Better visibility of network elements: &quot;With so many network elements, it is hard to keep track of which elements support which critical business services&quot; - this is a common challenge that most network administrators face today. With technological innovations and an increase in connected devices, today's IT networks are growing in size and complexity. Whether you're dealing with software-defined networks, cloud migrations or IPv6 transitions, you need reliable tools to help you monitor all your network assets and ensure smooth performance. CA and Broadcom's network monitoring software provides complete visibility into a complex ecosystem, which helps network operations teams readily track the data moving across devices and fix issues faster. 2. Intuitive insights into infrastructure planning: Network monitoring software gives you historical reports and predictions of how your infrastructure will perform. Analysis of historical data can help you determine if your current system landscape can</description>
      </item>
      <item>
         <title>DX NetOps Intent Based Networking for SD-WAN</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/dx-netops-intent-based-networking-for-sd-wan</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/dx-netops-intent-based-networking-for-sd-wan</guid>
         <pubDate>May 19, 2019</pubDate>
         <description>The complexity of software-defined networks (SDN) and the rate of change executed by controllers require a way to provision based on what a network should do and how it should behave. This approach is called intent-based networking (IBN) and is fast replacing manual or even script-driven device and network deployment and configuration. The ability to apply an intent for how a network should behave is the key to SDN and allows enterprises to maximize performance balanced with cost. The most popular IBN-based technology today is SD-WAN, and in this blog we'll look at how DX NetOps makes sophisticated IBN not only possible, but easy. Network engineers and administrators intend to configure a network to act a certain way when delivering traffic. This is known as traffic shaping and has been around for a long time in the form of QoS and similar technologies. Network administrators have used QoS to &quot;intend to&quot; deliver optimal network resources for critical applications and services. Through QoS policies, differing traffic classes can take advantage of network resources at a higher priority than less-critical traffic. Less-critical traffic can be dropped when contention for resources is high. This ensures the high-priority traffic is not impacted while crossing congested network links, but what about that lower-priority traffic that's dropped? What is the overall user experience of the network, and at what cost? How can these policies adapt to change in the network automatically? While QoS has provided some level of intelligent &quot;performance-based routing&quot;, SD-WAN technologies have taken this a step further. Network engineers can now define networks with more sophisticated intentions than dropping lower-priority traffic for higher-priority traffic using static policies. Now SD-WAN can be configured to optimize the flow of application traffic over the WAN; not just based on performance, but</description>
      </item>
      <item>
         <title>Using AIOps for Context Generation and Alarm noise reduction</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/using-aiops-for-context-generation-and-alarm-noise-reduction</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/using-aiops-for-context-generation-and-alarm-noise-reduction</guid>
         <pubDate>July 19, 2018</pubDate>
         <description>The beauty of artificial intelligence lies in its power to enhance human intelligence with machine-like computation capacity. AIOps-enabled solutions derive logic and decisions based on assessment of high-quality data. While it has found its relevance in multiple arenas, one of the important applications of AI lies in context generation. As data availability increases, it becomes imperative to help end users reduce noise from this data based on context, and to use “relative noise reduction” to enhance efficiency for humans. CA’s Digital Operational Intelligence uses powerful statistical methods combined with machine learning (ML) to provide users with the right “microscope”, accurately focused on their systems: one that not only predicts system behavior and helps fix problems before they occur, but also provides a contextual, noiseless view into those systems. The disparate monitoring platforms in enterprises have natural flexibility to be conservative or liberal when it comes to alarming the user about possible situations/issues. Typically, alarms are raised when metric values cross certain thresholds and are a cause for concern. However, not every alarm indicates a new incident, and not every alarm warrants a midnight call to an admin! What we want is “incremental liberalism” in the assessment of alarms to make the life of the end user easier. This means the end user should be exposed not to every alarm raised by conservative thresholds, but rather to the broad issues present in the system. This is imperative not only for reducing the end user’s effort of sifting through multiple alarms, but also for allowing efficient and quick root cause analysis. For CA Digital Operational Intelligence, we use machine learning-based modules to help the end user reduce the noise generated</description>
      </item>
      <item>
         <title>Where is my data!? Why GDPR is good for Mainframes</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/my-data-gdpr-good-for-mainframes</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/my-data-gdpr-good-for-mainframes</guid>
         <pubDate>August 30, 2017</pubDate>
         <description>An insider look at GDPR as a business enabler to enhance enterprise data privacy. There was a time when life was simple, and staying in complete control over all data in the business was a manageable task. But fast forward to today's application economy, with IT infrastructures more intertwined than ever before, and GDPR compliance is now more complex than ever. Consider the Connected Mainframe for example, where organizations are integrating the mainframe with Linux, mobile applications, APIs, and Java to drive digital transformation and significant ROI (300 percent ROI to be precise), but the integrations result in data moving on and off the platform - and ending up in places organizations don't realize. Digital transformation meets data privacy with the European Union's new regulation, the General Data Protection Regulation (GDPR). GDPR compliance is required of any organization that processes personal data of EU citizens, and helps businesses adopt more standardized data protection policies and processes. GDPR compliance takes full effect in May 2018, and those that fail to comply face administrative fines up to €20,000,000 or up to 4 percent of global turnover, whichever is higher. But many organizations aren't sure where all of their corporate data on the mainframe is located, whether it's being managed to policy, and what steps are needed to get started on their GDPR compliance journey. Mainframe and GDPR - what's the connection? The implications for the mainframe and GDPR are vast. The increased use of mobile devices alone is driving exponential growth in transaction volumes, and that data contains massive amounts of PII. This personal data is spread across the organization, widely used, transformed and accessed in different ways by different people, meaning application-based controls are not enough for complying with the regulation. The key first step toward achieving GDPR compliance for mainframe data</description>
      </item>
      <item>
         <title>Just What Is DevSecOps?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/just-what-is-devsecops</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/just-what-is-devsecops</guid>
         <pubDate>June 20, 2018</pubDate>
         <description>&quot;No such thing as bad publicity&quot;? While that may have been true once, it's no longer the case in today's digital era. We all want to make the headlines, but only for the right reasons. There are countless companies out there striving to become press darlings and be dubbed &quot;the Airbnb of fill-in-the-blank&quot; or &quot;the next Uber.&quot; But for enterprises taking tentative steps toward digital transformation, it's more important to avoid getting in the headlines for the wrong reasons: their security. The Security Challenge Security breaches and leaks are seemingly becoming more and more commonplace. Spend a minute googling &quot;data+breach&quot; and flick through the top stories listed; chances are good that there's been at least one incident reported today, another firm falling afoul of data security protocols and finding itself in hot water. Over the last few years, companies of all sizes have experienced major data leaks, and most worryingly, the size and scope of each new breach seems to outstrip the last. Moreover, there are some distressing statistics to be found. Approximately 62% of all cyber-attacks target smaller businesses, and according to Insurance Business Magazine, more than 31% of small businesses are &quot;unable to sustain their operations for more than a week&quot; after being hit by a cyber-attack. Clearly, security threats are increasing and it's becoming a challenge to keep up. DevSecOps may be the key to achieving just that. For those start-ups looking to become the next big thing, it seems the odds may just be stacked against them, unless they're one of the increasing number of organizations adopting a DevSecOps mindset. DevSecOps and Why It Matters The basic principles of Development-Security-Operations (DevSecOps) couldn’t be clearer and are built upon the idea that throughout the software development life cycle, everyone is responsible for security. While this may seem like an</description>
      </item>
      <item>
         <title>Are You Prepared for Disaster (Recovery)?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/are-you-prepared-for-disaster-recovery</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/are-you-prepared-for-disaster-recovery</guid>
         <pubDate>April 16, 2018</pubDate>
         <description>Most companies cannot efficiently execute their disaster recovery plans. Surprised? Nervous? If you can see your company in these murky waters, you should be... There's no escaping how essential IT has become to modern business; gone are the days when corporate life could continue without its IT systems. These days, across all industry sectors, critical business processes rely upon IT, and yet we're still being met by what feels like an age-old conundrum: what awaits us in the face of a disaster? Just to be clear, when I say 'disaster', I'm not talking about a situation which can be solved by the likes of high availability. In this case, disaster means a 'marginally-short-of-an-apocalypse' scale incident. Simply load-balancing servers isn't going to solve this situation. Before you get too comfortable and tell yourself, &quot;that sort of thing will never happen to me&quot;, I want to run some figures past you quickly, which have been compiled by IDG Group: 42% of surveyed companies experienced a 'catastrophe-marginally-short-of-an-apocalypse' event in the last year alone; 65% of firms are still relying on manual, human factors in their disaster recovery plans; and 72% of companies only test their disaster recovery plan (DRP) once a year. So, how confident are you now that you won’t experience the same sort of issue? There’s no accounting for the ‘when’ or ‘why’ once disaster strikes. It could be caused by human error, industrial sabotage, technological failure, or even an act of God; the root of the problem at this stage is irrelevant. As you know, businesses rely on IT implicitly, from communications to point of sale. In the immediate aftermath, the cause of the disaster is a secondary concern; your number one objective is simply how quickly you can get your systems back online and return to normal operations. What you need</description>
      </item>
      <item>
         <title>Growing Adoption Of Your Public Cloud The Right Way - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/growing-adoption-of-your-public-cloud-the-right-way-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/growing-adoption-of-your-public-cloud-the-right-way-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>December 21, 2017</pubDate>
         <description>Public cloud is here and growing in the majority of enterprise companies, but adoption is often ad hoc in nature. Typically, the push to the cloud comes initially from siloed development groups. Eventually the broader organization begins to see the benefits and adoption grows. To truly maximize the benefits of cloud (or multiple clouds) and ensure successful adoption, the right strategy needs to be in place.

Watch this short video with renowned industry thought leader David Linthicum as he talks about the right way to adopt public cloud, drawing on his experience with some of the leading enterprise organizations. He highlights security, monitoring, governance and skill sets as some of the key aspects to consider while formulating a holistic cloud adoption strategy.

To get upcoming videos with David, please sign up here. At CA we are continuously adding capabilities for monitoring and managing public cloud-based infrastructures. Don't take my word for it; try it out yourself.
</description>
      </item>
      <item>
         <title>Short Story: How an MRI from CA Changed Mainframe Mike's World</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/short-story-how-an-mri-from-ca-changed-mainframe-mike-s-world</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/short-story-how-an-mri-from-ca-changed-mainframe-mike-s-world</guid>
         <pubDate>May 17, 2018</pubDate>
         <description>A chance meeting with an old friend shows Mainframe Mike how to prove the relevance and value of the Mainframe in a modern digital enterprise. Read the story to find out what makes Mainframe Resource Intelligence (MRI) from CA a genuine Mainframe game-changer. Commuting once again to his job as the Head of Mainframe Operations in a large enterprise, Mainframe Mike reflected on the day ahead. He took a deep breath. There would be more battling against the impact of the CIO's cost cuts. And more skepticism from his Enterprise Architect colleagues about the role of the Mainframe in the company's digital transformation. Crawling through the rush hour traffic, Mike thought about how the Mainframe was misunderstood. These guys didn't seem to understand the Mainframe or its value, and were convinced their cloud-first strategies would deliver speed and innovation at lower prices than Mainframe services. Then there was the constant burden of proof to show that cost optimization and digital transformation really were possible with the Mainframe. True, there were challenges proving the economics of Mainframe, a shrinking skills base, and the constant balancing act between time to innovate and keeping the lights on. 'Time for a coffee,' thought Mike, and he pulled over at the next junction. Queuing to get served, Mike felt a tap on his shoulder. &quot;Mainframe Mike?&quot; said a voice. &quot;Big Iron Bob!&quot; Mike exclaimed. Standing before him was one of his oldest friends from college, whom Mike hadn't seen for years. &quot;Still working in Mainframes?&quot; asked Mike. &quot;Sure am,&quot; said Bob. &quot;Had a tough patch for a while but right now things couldn't be better. For example – our Mainframe applications are now part of an enterprise-wide DevOps initiative and finally, with GDPR around the corner, the enterprise security guys are asking the right questions about</description>
      </item>
      <item>
         <title>DevOps Orchestration with CA Automic Release Automation v12.1</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/devops-orchestration-with-ca-automic-release-automation-v12-1</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/devops-orchestration-with-ca-automic-release-automation-v12-1</guid>
         <pubDate>November 5, 2017</pubDate>
         <description>The continuous delivery pipeline can be visualized as a factory. What do factories require? DevOps Orchestration. DevOps orchestration ensures an organization’s development, physical environments and processes are capable of delivering new builds into production as rapidly as possible. The continuous delivery pipeline can be pictured as a factory and, like a factory, there is a certain level of specialization required for the various tasks that must be accomplished. However, while individual tasks or steps can be automated, the factory’s end-to-end process – its output – is obviously the most important from a business perspective. Similarly, in DevOps, we have many specialized lifecycle tasks that are distinct from one another, but the most important measurement is the end release. Firstly, there’s design and development; next there’s testing (across multiple levels); then comes production monitoring, and round she goes. While the stages are generally the same across teams and organizations, the specific requirements and preferences of said teams/organizations lead to a marketplace filled with thousands of tools for accomplishing generally similar tasks. DevOps Orchestration: Beyond Standardization In the days predating agile, the principle of standardization was popularized by CIOs and vendors. The idea that a company can greatly benefit from standardizing tools and practices across all its infrastructure and business applications made a lot of sense in an age when agility wasn’t the most critical competitive advantage. Nowadays it’s completely different. As organizations compete in an ever-evolving and ever-improving world of customer experience, standardization has subsided. In its place, the enablement of teams has risen, granting the freedom to use any tools they deem fit for purpose. Results matter.
With an ever-growing set of technologies and tools, the challenge of automating an end-to-end process within a software factory becomes ever more critical. While enabling the separate teams to use their preferred</description>
      </item>
      <item>
         <title>Predictive Analytics and Machine Learning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/predictive-analytics-and-machine-learning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/predictive-analytics-and-machine-learning</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>It wasn't long ago that AI was a semi-mystical concept that belonged in science fiction films. But although AI and machine learning are moving into the mainstream, many IT leaders see machine learning for IT operations as an interesting but complex topic. Doesn't machine learning mean lengthy engagements with expensive data scientists? In a word, no. You can apply machine learning to mainframe data to help you make IT Operations decisions that are better for your business. And doing it is a lot more straightforward than you might think. AI 101 First, let's pin down a few key terms. Forrester draws a helpful distinction between &quot;pure AI&quot; that strives to mimic human intelligence and &quot;pragmatic AI&quot; that applies a moderate level of intelligence to applications. Machine learning is an example of pragmatic AI in action. In machine learning, algorithms analyze data to find models that can predict outcomes or understand context with significant accuracy. Learn and predict So how does machine learning enable predictive analytics? These intelligent algorithms learn patterns in data and use what they learn to predict similar patterns in new data. Machine learning automates this process, equipping computers to make progressively more accurate predictions. In real time. Without human intervention. Intelligence in action Let's take a real-world example of machine learning in action: a smart thermostat. With a regular thermostat, you simply turn it up and down as the temperature and seasons change. But with a smart version that incorporates machine learning, your thermostat gets to know your preferences based on your behavior and applies them autonomously. This learning process is based on patterns of behavior rather than one-off instances. Your smart thermostat will learn your preferred settings for different times of day across different rooms, based on whether it's a weekend or a weekday, and predict the</description>
      </item>
      <item>
         <title>Is Batch Dead Again?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/is-batch-dead-again</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/is-batch-dead-again</guid>
         <pubDate>April 17, 2019</pubDate>
         <description>It is probably not the first time you have heard that batch processing is dead. The exponential growth of computing power has indeed made interactive processing preeminent. However, most IT organizations still dedicate a significant share of their budgets to automating repeatable tasks and routine processes. And now, we are quickly heading towards a scenario where automation tools are capable of thinking for themselves and making decisions based on policy; this is what is becoming known as intelligent automation. Intelligent automation opens the way for undertaking more complex IT tasks autonomously, applying better awareness and understanding of underlying business data that can be acted on. Thriving in increasing complexity It is not a big secret that digital transformation has made business processes bigger and more complex. As a consequence, it becomes increasingly difficult to see in real time how changes in one area may impact another. That’s why it is vital that modern IT automation steps up and outperforms basic job schedulers and script runners. What’s needed is automation that can better understand the underlying business context and enable proactive business process management. The main capability you should expect from intelligent automation is to sense large amounts of business data to drive existing processes or workflows, learning and adapting dynamically. It can help you avoid ripping and replacing your infrastructure and applications while actually improving your service delivery. So, no need for a big bang. The implications of embracing intelligent automation are improved processes and faster response times at a fraction of the cost. Coping with modern architectures Yet another effect of digital transformation: the mobility of workloads is accelerating as we move into a new world of clouds, containers and serverless architectures.
This trend just creates more technical silos, adding layers of systems management and increasing the potential for</description>
      </item>
      <item>
         <title>Extreme IT Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/extreme-it-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/extreme-it-automation</guid>
         <pubDate>July 22, 2018</pubDate>
         <description>Following the Extreme IT Automation webinar on DevOps.com, I wanted to provide a summary and explore some of the topics discussed. You can watch the full webinar here. How DevOps Happens DevOps transformed the software development process. It facilitates continuous delivery, that is to say, faster and more efficient releases without a corresponding increase in operational risk. But DevOps is itself predicated on a number of pillars: culture, lean process design, measurement, sharing and automation. Automation is the practical element of DevOps and enjoys a symbiotic relationship with the other pillars. While it might be possible to embrace a DevOps culture, the effects and benefits of doing only this to the exclusion of the other four pillars are minimal. This culture can only thrive when it is built upon a strong technical foundation. Indeed, the larger, more complex and heterogeneous an enterprise is, the more significant the role automation plays. As the heartbeat of an organization successfully doing DevOps, automation streamlines development and deployment, creating time for staff to work on the innovative projects that will deliver your company's critical differentiators. In other words, it forms the central tenet of your digital transformation. Getting It Right There is no hard and fast rule on the use of automation, and the requirements, use cases and scale of its implementation vary from company to company. Nonetheless, there are general guidelines that can be used to shape your approach. The fundamental idea is to remove manual processes that are tedious, time-consuming and liable to involve human error. Moreover, this gives rise to the possibility of undertaking projects that would not have been possible before. For instance, without automation, a task that might involve completing the same action hundreds of times a day was simply not possible due to a lack of hours in the</description>
      </item>
      <item>
         <title>What's the Point of Big Data Without the Insight?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/what-s-the-point-of-big-data-without-the-insight</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/what-s-the-point-of-big-data-without-the-insight</guid>
         <pubDate>April 19, 2018</pubDate>
         <description>Strategic for business but overwhelming for IT: how do you manage Big Data? The Big Bang Theory is the pre-eminent explanation for the origins of the universe. It proposes that, from an explosion of atoms, the universe rapidly expanded and in doing so created everything we know. But it did so in an erratic, haphazard and unstructured way. For every life form, an infinite number of inhospitable, desolate galaxies, planets and black holes were also created. Big Data is often associated with the birth of what is known as the Third Generation of IT, and many organizations are now at a crossroads: Do they continue down the current path, hoping to stumble upon a planet Earth as they navigate a sprawling universe of information and data that continually expands in a totally unstructured fashion? Or do they apply a longer-term strategy, investing now to refine their technology and capitalize on the explosion of information available? Power is Nothing Without Control If you're using big data, presumably you're doing so to enable and enhance your digital transformation; no company implements a big data utilization strategy in order to consume time and hinder agility. Yet, too often, this is exactly what happens! When something powerful emerges, controlling and harnessing it requires either great ingenuity or great strength. Throughout history, from the wheel to the Internet, humans have always adapted, evolved and developed new tools to overcome the challenges they have faced. And the technical world is no different, with big data revolutionizing the way we work and understand human behavior. Hadoop is perceived as an especially powerful, agile technology that can help both data scientists and IT departments integrate big data applications and processes into your IT ecosystem. While this could transform your company, attempting to control it can be nightmarish, devouring time</description>
      </item>
      <item>
         <title>11 Steps to Having Difficult Conversations, Successfully</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/11-steps-to-having-difficult-conversations-successfully-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/11-steps-to-having-difficult-conversations-successfully-rally-software</guid>
         <pubDate>April 12, 2018</pubDate>
         <description>Every day in the hyper-collaborative R&amp;D organization for Rally, we face difficult conversations with opposing opinions, high emotions and high stakes. Every time they occur, we are faced with the opportunity to avoid the situation or step into it. However difficult, the best thing to do is to tackle the conversation in a way that will create better outcomes for you, your team, the product, and ultimately, the customer. But stepping in isn't always enough. With high emotions, our bodies often react in fear, causing us to come out defensive or angry, creating an unsafe environment that is not conducive to successful outcomes. To combat our fight-or-flight instincts, we decided it was important to have a common framework for success we could practice across our organization. We hope that by sharing it with you, you will bring it into your organization, and maybe share some of your tips and tricks with us. 1. Identify issues and write them down. Don't script your introduction or discussion, but jot down some notes about what is really bothering you. If you write down issues vaguely, like &quot;you're always late&quot; or &quot;you never follow the schedule&quot;, the other party will immediately jump on the defensive with examples of every time your statement was false. Instead, write down how it makes you feel and how it affects you or the team. Example: A developer is regularly 20 minutes late. Does it bother you because you feel your time is not valued? Because your commitments are being compromised? Because you feel they are not being held to the same standard as you? Be as specific as you can. 2. Ask yourself several questions. What is the purpose of this conversation? What do you hope to accomplish? What is your ideal outcome? The conversation is</description>
      </item>
      <item>
         <title>A Smart Approach to Doubling Down</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/smart-approach-doubling-down</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/smart-approach-doubling-down</guid>
         <pubDate>January 11, 2019</pubDate>
         <description>As someone who has been in the mainframe business for years, I've heard countless worries about the future of the mainframe and its structural compatibility with modern innovation. Innovation is vital to a business that wants to stay competitive and continue to deliver an exceptional customer experience. So, it begs the question: how can we continue to innovate effectively while maintaining the mainframe as the vital backbone of the business? Since CA's acquisition by Broadcom in November, we've taken time to look to the future to realize what we can achieve in our new organization. Greg Lotko explained some of the exciting changes coming as we look to the future in his latest mainframe.ai blog. Vikas Sinha's blog described our strategy for helping our customers succeed. Here, you will find how we are looking to the future as it relates to each of our product areas. Quite simply, by using a smart, balanced approach. By maintaining the core of who we are while investing for the future. We know that change is hard. The law of inertia tells us just that: an object's motion doesn't increase unless it is acted upon by another force, and the same can be said for motion decreasing. So what is the inertia that tends to hold the mainframe at a standstill? First, it's difficult to make major changes to a platform that is crucial to your business 24-7. Just like performing surgery on a beating heart, making major changes to the mainframe is a massive, careful undertaking. Second, with almost half of mainframers today being over 50 and just 7% being under 30, the mainframe workforce is aging. Third, mainframe technology lasts a long time -- and that's a good thing! -- however, it lacks modern interfaces that newer</description>
      </item>
      <item>
         <title>Building Digital Trust in a Digital Economy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/building-digital-trust-in-a-digital-economy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/building-digital-trust-in-a-digital-economy</guid>
         <pubDate>October 26, 2017</pubDate>
         <description>In this final stretch leading up to CA World, I am reminded of how great leaders provide stability during periods of great stress. For the digital economy, I've observed a similar need to establish a System of Trust that addresses the fear and uncertainty emblematic of incidents such as the Equifax breach. In the last two months I've shared why we in the vendor community believe the new IBM z is a game changer, and CA Technologies' support for this incredible advancement. As part 3 of this series, I'd like to explain how digital trust has emerged as a defining strategy for the digital enterprise. Our customers in the digital economy know well how innovation can be a double-edged sword. Although mobile devices, cloud environments, social media, and the Internet of Things (IoT) have brought the opportunity for them to augment and grow, they have also surfaced new threats in the form of cyberattacks and data breaches. Navigating this uncertainty requires a level of digital trust so that you can take risks with confidence in this complex and swiftly evolving digital economy. In fact, when was the last time you asked: Will the next online transaction your company processes result in customer satisfaction, or create trust concerns around identity theft? Will you be able to verify that the customer involved in this transaction is real and not a bot pretending to be a user? Will your digital applications and services be always on, anytime and anywhere, to win the trust of your customers? At the Gartner Symposium ITXPO, I delivered a customer insight-packed presentation covering the challenges businesses face on the digital frontier and how those challenges impact the three main aspects of digital trust. Verify People: With the cost of identity fraud rising to $16B, verifying and securing</description>
      </item>
      <item>
         <title>Don't Underestimate the Mainframe Database and Its Role in Your Modernization Strategy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/don-t-underestimate-the-mainframe-database-and-its-role-in-your-modernization-strategy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/don-t-underestimate-the-mainframe-database-and-its-role-in-your-modernization-strategy</guid>
         <pubDate>July 10, 2018</pubDate>
         <description>IT leaders across the globe are addressing changing market dynamics by seeking opportunities to connect their data to cloud and microservices without disrupting their existing data management infrastructure. The big question of the moment is: &quot;Can you continue to use a mainframe platform and Database Management System (DBMS) as the system of record for all mission-critical business activities?&quot; The mainframe DBMS has delivered tremendous value for years, providing unprecedented reliability, uptime, and accessibility, and enabling existing applications, business logic, and data to be reused, exposed, and consumed through APIs. Yet the term &quot;mainframe&quot; is often falsely associated with the idea of &quot;legacy&quot; technology. Nothing could be farther from the truth! The mainframe DBMS is a mature, evolving, dynamic data store for the data of today and tomorrow. Investments in mainframe modernization and the simplification of mainframe DBMS products are delivering new capabilities at an accelerated rate, and keeping the mainframe database at the heart of the modern mainframe software factory. DBMSs such as CA Datacom® and CA IDMS™ remain the systems of record for government, insurance, financial, and retail organizations, among others. Successful strategies on mainframe database platforms bring modernized applications to the mainframe and achieve game-changing business outcomes. That being said, what does present and future data management on the mainframe look like? The Explosion of Mainframe DBMS Use Cases Data evolves over time, as does data usage. Twenty years ago, data was simply part of an application process. Today, it is a living, breathing part of the digital ecosystem.
Accordingly, there has been explosive growth in both data access and captured data, and the mainframe database has been notable for keeping pace with increasing access and volume, and with the new variety of data types, including biometric images such as fingerprints and retinal scans. Consider these</description>
      </item>
      <item>
         <title>The Transformed and Transformative Platform of the Future</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-transformed-and-transformative-platform-of-the-future</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-transformed-and-transformative-platform-of-the-future</guid>
         <pubDate>May 4, 2018</pubDate>
         <description>I recently had the pleasure of sitting down with Denny Yost, Editor-in-Chief at Enterprise Executive magazine, to discuss the state of the Mainframe. Mainframe is woven into the fabric of today's digital economy. It provides unprecedented agility, speed, and security, from deep automation to next-gen developer toolsets, to support businesses throughout their digital transformation. While there is talk of being 100% &quot;in the cloud,&quot; the better question for businesses to ask is &quot;What is the right mix of technologies to achieve my desired business outcomes and succeed in the digital world?&quot; The mainframe, with 70% of corporate data residing on the platform, is a key part of the answer. For CA Technologies, success in the mainframe arena means keeping an open dialogue with our customers, leveraging feedback to improve our products, and constantly innovating for a better business world. We are committed to empowering our customers with the right knowledge and software to meet and exceed their business needs. Let's think practically. Right now, IT executives are facing a number of challenges in data management, containment, and security. In my interview with Denny Yost, I discuss the issue of balancing the desire to optimize environments for maximum efficiency and productivity with the reality of skills attrition, tight budgets, lack of resources, and data privacy. Then there's the question of what's coming down the road. That is the really juicy stuff. The mainframe space is dynamic, and has done well in keeping up with the times; at CA, we are expanding and developing the adoption of agile methodologies for continuous delivery and building out tools to help with &quot;traditional&quot; and &quot;next-gen&quot; mainframer collaboration, among other things. I invite you to read the full Enterprise Executive article here, as well as my debut column, Digital Enterprise at Scale, which</description>
      </item>
      <item>
         <title>5 Ways Automation Can Increase the Value of JD Edwards EnterpriseOne</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/5-ways-automation-can-increase-the-value-of-jd-edwards-enterpriseone</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/5-ways-automation-can-increase-the-value-of-jd-edwards-enterpriseone</guid>
         <pubDate>August 30, 2018</pubDate>
         <description>How enterprise automation can increase your return on investment and reduce your total cost of ownership. When you implemented JD Edwards EnterpriseOne, your goal was to simplify recurring cumbersome business processes. To this end, you probably also purchased one or a number of JD Edwards EnterpriseOne solutions-for example, Customer Relationship Management, Financial Management, Supply Chain Management, and/or Business Intelligence. Your efficiency jumped immediately. However, it's likely you quickly became aware of areas for improvement, including the need to reduce your batch window, control business processes more effectively and distribute output more efficiently. This (and more) can only be achieved with enterprise automation. In this article, we'll explore five ways enterprise automation can enhance your JD Edwards EnterpriseOne return on investment (ROI) and reduce your total cost of ownership (TCO). 1. Nightly Report Processing If you're running simple job streams with limited numbers of reports, the JD Edwards EnterpriseOne scheduler is generally satisfactory. However, if you're running complex job streams with large amounts of data, the processing time can extend well into the nightly batch window. The lack of automation functionality can also demand complex scripted workarounds. Enterprise automation shortens that batch window. Take the example of job streams, where some reports must run sequentially, but other jobs can run in parallel. Using enterprise automation, you can model the process flow, with strict dependencies between reports running sequentially or in parallel. You can also configure failure blocking. If a report ends in error, the job can block all following jobs in the process flow from executing and alert an operator. You can then resume the flow from the point of failure, rather than rerunning the entire job stream.
The on-board scheduler for JD Edwards is also inefficient; when given many jobs for execution at the same time, it is slower to</description>
      </item>
      <item>
         <title>Enable Legacy Agility and Futureproof Your Technology Stack</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/enable-legacy-agility-and-futureproof-your-technology-stack</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/enable-legacy-agility-and-futureproof-your-technology-stack</guid>
         <pubDate>October 31, 2018</pubDate>
         <description>Technology moves at such a fast pace that it can be easy to forget what came before it; the temptation is to continually move in with the new and out with the old. We are constantly looking forward, even beyond the technology we are currently using, hoping to see the latest breakthrough in machine learning or wondering what the implications of big data will be. However, speculation and curiosity offer no certainty as to what is about to come and, if we are not careful, can actually distract us from the technology of the here and now. While there is no harm in looking forward, we must consider the future in relation to the present and the past. History shows that while a great many apps will be rewritten to leverage new architectures and other apps will be modernized to some extent, substantial chunks of the portfolio will remain with legacy and hybrid architectures for quite some time. This results in a multi-modal environment, which will be extant for the foreseeable future. Without looking at the picture holistically, we immediately run into a number of issues. Failing to address the past means we fail to bring agility to legacy products, and you're only as quick as your slowest-moving part. Likewise, if we focus purely on the past, we can fail to futureproof our products and applications for new technical innovation. By addressing all parts of IT infrastructure together, we prevent different departments from becoming siloed or isolated, which would in turn lead to fragmented, slow and inconsistent processes. However, simply ripping and replacing our systems, or homogenizing them, is not an option for a number of reasons: the complexity of modern enterprises, the importance of an array of technologies and the cost of replacing entire systems laden with valuable data.</description>
      </item>
      <item>
         <title>Mainframe and the Inevitable Attack of the Internet of Things</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-inevitable-attack-internet-things</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-inevitable-attack-internet-things</guid>
         <pubDate>December 12, 2016</pubDate>
         <description>The scramble to profit from the Internet of Things increases risk to enterprise mainframes. I first wrote about the risks that the Internet of Things (IoT) portended back in 2014. Even then, it was obvious the rush to commercialize the new technology was leaving gaps that thieves and cyber terrorists would inevitably exploit. As organizations scramble to profit from new technology, the risks to enterprise mainframes increase, requiring new safeguards against them. The IoT is a technology that will bring additional value, convenience, and productivity -- there's no doubt about that. Unfortunately, we will experience periods when the security of IoT devices is going to expose enterprises and individuals to very negative outcomes. What does mainframe have to do with IoT? As a data security professional, I follow Brian Krebs, an American journalist and investigative reporter known for his coverage of profit-seeking cybercriminals. The excellence of his reporting has made him one of the premier voices in the data security space, and his outspokenness has made him a notable target for those who want to demonstrate their technical or cybercriminal expertise. His site, Krebs on Security, suffered one of the largest distributed denial of service (DDoS) attacks on record, in which access to his site was drowned by as much as 665 Gbps of traffic. The most interesting element, however, was the origination points of that deluge of data. As Krebs himself notes, &quot;The huge assault this week…appears to have been launched almost exclusively by a very large botnet of hacked devices.&quot; The second attack of a similar nature impacted a far broader audience than data security professionals seeking to read a blog. It affected consumers of a wide variety of online services, including Netflix, Spotify, Twitter, Pinterest, CNN, Tumblr, Reddit, and more. The attack impacted these services across a broad swath</description>
      </item>
      <item>
         <title>Achieving DevOps for Hybrid IT with CA Endevor SCM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/achieving-devops-for-hybrid-it-with-ca-endevor-scm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/achieving-devops-for-hybrid-it-with-ca-endevor-scm</guid>
         <pubDate>November 1, 2018</pubDate>
         <description>As we approach the year 2020, mainframe remains the platform of choice for processing large workloads, and despite what people may have thought 30 years ago, the mainframe shows no signs of going away. The world's largest companies continue to rely on its secure, proven and unparalleled transaction processing capability. Yes – there are new interfaces and systems of engagement, leveraging mobile and web platforms – but the core of critical business applications continues to be mainframe, which means, despite the natural attrition of expertise, development on the platform must go on. However, attracting the next generation of developers requires evolution. Just as the end users of the business application systems we develop prefer to use modern interfaces like web and mobile to interact with the mainframe, so too do developers. The most successful teams have always used automation wherever possible to meet their demands, but to be effective today’s developers want more. They want rich and visual tools, from graphical IDEs providing language-specific productivity boosts like content assist and automated refactoring, to DevOps pipeline management tools that show at-a-glance where their changes are in the lifecycle while orchestrating specialized testing tools along the way. The Agile development methodology has required changes as well – for example, to embrace incremental delivery, today’s developer leverages advanced branch and merge techniques to ensure they don’t create unnecessary dependencies between changes. In the past, the adage “if it ain’t broke, don’t fix it” has been the rule, with the preference being stability over change. That is no longer the case though – the enterprise is seeing good return on investment with transformation and they want to bring mainframe into the fold, as well as enable new developers to be productive on the platform. Change has become inevitable, but the big question is “how</description>
      </item>
      <item>
         <title>Dynamic Businesses Require Dynamic Inventory; Are Your Network Tools Ready?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/dynamic-businesses-require-dynamic-inventory-are-your-network-tools-ready</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/dynamic-businesses-require-dynamic-inventory-are-your-network-tools-ready</guid>
         <pubDate>February 24, 2018</pubDate>
         <description>One of the major impacts of the dynamic nature of DevOps and the software-based economy is how network services are delivered and assured by network tools. Not long ago, whenever there was a need for new applications or systems, the applications, infrastructure and network monitoring teams would sit together and create an elaborate plan for the project. Network engineers would provide a blueprint for the network changes and connectivity requirements, an estimate for the new load on the network, and an update to the traffic policy. All the planned changes would go through manual testing cycles before the physical effort would begin to get the new services into production. This is all about to change. Welcome your network tools to the era of SDN Modern network technologies help eliminate a lot of the manual intervention for deploying new applications and infrastructures. With software-defined infrastructure, NetOps teams can provision applications and infrastructure automatically based on a repeatable set of policies. Depending on how the policies are defined, network and L4-L7 services can be deployed with little or no manual intervention. While flexibility and automation have many benefits, software-defined networking (SDN), software-defined data centers (SDDC), software-defined WAN (SD-WAN) and network functions virtualization (NFV) can increase the complexity for NetOps and the network tools that they use. As new applications and infrastructure are deployed, this dynamic inventory can create blind spots that weren't anticipated or outlined in an elaborate blueprint. NetOps may only become aware of the new apps or infrastructure when an issue occurs that impacts the user experience. Figure: CA Spectrum dynamic inventory and topology mapping for Cisco ACI architectures. As such, network asset discovery, topology mapping and network monitoring software must also be automated and dynamic.
Integration with the orchestration layers for SDN, SDDC, SD-WAN and NFV is</description>
      </item>
      <item>
         <title>Mainframe Automation: Continuous Delivery Isn't Just for Kids Anymore</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/mainframe-automation-continuous-delivery-isn-t-just-for-kids-anymore</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/mainframe-automation-continuous-delivery-isn-t-just-for-kids-anymore</guid>
         <pubDate>November 22, 2017</pubDate>
         <description>The mainframe has long been declared 'dead', yet it remains a pivotal part of the modern enterprise. The mainframe is dead! Long live mainframe automation! For decades, people have been discussing the 'demise' of the IBM mainframe. Truth be told, it continues to be at the heart of many of the world's largest financial institutions, insurance companies, healthcare organizations and retail businesses. Some even estimate its usage is growing – did you know: IBM estimates that 1.3 million CICS transactions are executed every second? Over 220 billion lines of COBOL code run these mission-critical systems? Does that sound &quot;dead&quot; to you? Rather, there's more pressure than ever to integrate these &quot;systems of record&quot; with modern &quot;systems of engagement&quot;. The digital revolution is thriving: the Modern Software Factory is a reality and mainframe can play a crucial role in the success of this phenomenon. Modernizing with mainframe automation Agile development and continuous delivery are all the rage today, enabling the delivery of highly innovative applications to market at a faster pace and with greater quality than ever before. For many businesses, this revolution must include mainframe automation components, as there is often a tight relationship between the &quot;systems of record&quot; and customer facing applications on the web. No company understands this better than CA Technologies. We've been providing mainframe technology to enterprise businesses for over 40 years. Many of these customers are also highly engaged in implementing CA Automic Release Automation and other continuous delivery solutions to streamline and speed up their application release process. Now, with CA Automic Release Automation v12.1 they can integrate their mainframe application release processes with everything else. We've even provided integration to CA Endevor, the leading mainframe software change management solution.
The CA Automic Release Automation integration for IBM Z optimizes the delivery and</description>
      </item>
      <item>
         <title>ERP Automation That is Built for Change</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/erp-automation-that-is-built-for-change</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/erp-automation-that-is-built-for-change</guid>
         <pubDate>January 31, 2018</pubDate>
         <description>An enterprise-wide, systemic approach to ERP automation provides the agility and visibility to thrive in the application economy. An enterprise resource planning (ERP) system is like the central heating system in your house. In the same way an ERP platform powers everything from finance to production, so the central heating system provides essential hot water and warmth. It’s hard to imagine life without either of them. Here’s the ‘but’. Central heating systems are often as old as the house they power. They don’t operate very efficiently. They cost money to maintain. And the ERP platform? That’s typically much the same: large, monolithic, slow to adapt to change, expensive, and – most importantly – on-premises. Like the home heating system, businesses cannot afford the cost or risk involved in ‘ripping and replacing’ these ERP systems, including SAP, Oracle, and others. Instead, they need to maximize the return on investment from their ERP platform and ensure it is sufficiently flexible to enable modern business practices. That means extending ERP processes beyond the line of business silo – often to cloud-based solutions; introducing new functionality to satisfy customer demand quickly without impacting business as usual operations; or connecting ERP information to data warehouse and big data initiatives to facilitate timely analysis and reporting. The financial close process, for example, doesn't exist on its own. Data needs to be pulled from all corners of the business before it reaches the ERP. This information can be held in multiple applications or databases. Systematic, enterprise-wide ERP automation To overcome these challenges and support core processes within an ERP platform, enterprises typically adopt process automation. However, this is frequently approached in an opportunistic way – with different point automation tools deployed as and when needed. Manual effort may be reduced – but at the cost of</description>
      </item>
      <item>
         <title>The State of Automation: EMA Research Explores the Current Outlook</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-state-of-automation-ema-research-explores-the-current-outlook</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-state-of-automation-ema-research-explores-the-current-outlook</guid>
         <pubDate>October 24, 2018</pubDate>
         <description>Why Has Automation Become Critical to the Enterprise? Today, CA Technologies will launch The State of Automation report, in association with Enterprise Management Associates (EMA). The report suggests automation is becoming the backbone for modern businesses, and those that do not adopt it will flounder and struggle to survive. The report details how business strategies have been radically affected by automation. Those embracing it have already seen significant productivity gains and revenue growth. 49% already report having used it to generate new revenue opportunities. However, as the report digs deeper, the reality is that many companies have only a cursory understanding and lack the automation maturity required to truly exploit its potential. This is not to say they do not recognize its importance, but that they simply are not making full use of it: 51% of organizations see automation as key to delivering a rapid rate of change and 98% have plans afoot to adopt it across their businesses. How Will This Play Out Moving Forward? With the advent of artificial intelligence and machine learning, automation capabilities are expected to be enhanced, and their value to the business to increase. Indeed, 98% believe AI and machine learning will improve automation, and 65% anticipate these advances will enable them to leverage data for better decision making and increase overall knowledge through better usage of historical data. Thus, moving forward, we can likely expect intelligent automation to become increasingly pivotal within the enterprise portfolio. The report does state, however, that the driving force behind any automation strategy will come from the top down, led by CIOs and business executives. That said, staff on the ground will likely receive significant benefits, being freed up to work on strategic and innovative tasks-and the biggest beneficiaries will be those in finance and accounting. Competing Approaches The new</description>
      </item>
      <item>
         <title>GDPR is Alive. Now What?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/gdpr-is-alive-now-what</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/gdpr-is-alive-now-what</guid>
         <pubDate>May 28, 2018</pubDate>
         <description>To quote Mary Shelley's Frankenstein, &quot;It's alive, it's moving, it's alive...&quot; The General Data Protection Regulation (GDPR) went into effect on Friday. The EU adopted the regulation nearly 2 years ago with the intention of replacing the 1995 Data Protection Directive. As of last week, all Member States and affected businesses must comply with the requirements of the legislation. This includes any business, in any country, that collects and maintains the personal data of EU citizens. The &quot;bare minimum&quot; approach to mainframe data security is no longer feasible. Enterprises cannot survive under the reign of GDPR if they cannot consistently and verifiably protect sensitive information-be it in the cloud or on the mainframe. Businesses, today, are answerable to regulators and consumers alike, and must actively effect a gold standard of mainframe data security-which includes implementing security measures related to data testing, management and movement. GDPR is that standard. GDPR compliance has been an ongoing concern for enterprises worldwide, and many of our customers have asked us &quot;What should we do?&quot; At this point, enterprises managing data at scale should already have: appointed a Data Protection Officer, who will certify compliance with GDPR and other applicable data security laws, planned, defined and approved a budget for GDPR compliance that will cover additional resources required, including technology solutions and personnel, alerted specific teams on upcoming changes, engaged both &quot;sides of the house&quot; (mainframe and distributed) to develop a cross-platform strategy, and begun instituting a culture of compliance with a specific plan to train teams on company-wide GDPR policies. So what next? GDPR aims to re-establish individuals' control over their personal data.
Complying with the regulation means knowing your data inside and out-where sensitive information is hidden, who has access to that data, what the best method is for proving compliance-to efficiently act</description>
      </item>
      <item>
         <title>Mainframe: The Good Corporate Citizen - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-good-corporate-citizen</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-good-corporate-citizen</guid>
         <pubDate>July 8, 2019</pubDate>
         <description>Making the mainframe a good corporate citizen by updating the mainframe to current corporate security standards. “It is not always the same thing to be a good man and a good citizen.” — Aristotle As Aristotle wisely observed, there is a stark difference between being a good man and being a good citizen. We can take this adage and apply it loosely to what we often see in Mainframe environments with hundreds of customers today. The Mainframe is considered a ‘good man’ from a security practices viewpoint, but we need to ensure that the mainframe is a good corporate citizen by updating and maintaining our mainframe security management practices to the current security standards already in place in IT shops everywhere. The mainframe has always been considered a secure computing platform. The core of IT security practices today formed from mainframe best practices, such as identity management solutions, which provide the foundation of resource protection within the system. Back when lessons were learned and security practices were being defined, access to computing resources was done via terminals, emulated or physical, that were identified in a particular network architecture, with activity logged, users tracked and external access limited or impossible. But several things have happened since those nascent times. First, distributed platforms evolved and became significant targets for bad actors. As a result, over time these systems have adopted and been equipped with technology and methodologies to enhance their security. It is no surprise that many had their beginnings in Mainframe security management, and we now see mature identity and access management solutions protecting distributed systems, logging and correlating activity, and tracing network access, to name a few of their defensive capabilities. Next, the mainframe evolved to meet the needs of today’s modern computing requirements, making its massive</description>
      </item>
      <item>
         <title>IT Operations. How will they help my business?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/it-operations-how-will-they-help-my-business</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/it-operations-how-will-they-help-my-business</guid>
         <pubDate>April 17, 2017</pubDate>
         <description>By Jeff Henry, VP Product Management, CA Technologies In my previous blog in this series, I introduced the topics of AI, machine learning and predictive analytics. And I shared my belief that it's no longer necessary to engage a data scientist to enjoy the benefits of machine learning in IT operations. This time, I'm going to take a closer look at some of the specific ways machine learning and predictive analytics help enterprise IT departments overcome some of their biggest challenges. Intelligent Opportunities Let's start with a core platform: the mainframe. Mainframes remain mission-essential in today's app economy, but the skills and know-how to manage them are harder and harder to find. And yet, 55% of apps touch a mainframe, including those designed to process credit card and airline transactions, and more. Mainframe uptime is absolutely fundamental to business performance – impacting everything from user experiences on mobile apps to customer satisfaction to your ability to transact with customers. What do these demands mean for IT operations teams? They mean pressure to mitigate risks so the mainframe environment stays healthy and always-on. And they create a critical need to prevent issues from occurring, and resolve them proactively if and when they happen. Reactive or proactive? Now think about your current mainframe monitoring and management. Is the emphasis on how fast you can react to problems once they've happened (MTTR), rather than anticipating issues before they bite? You're not alone. As mainframe systems and applications become ever more complex and system knowledge decreases over time, the time taken to triage problems and repair failures often increases. The challenge of retaining mainframe experts with the skills to deal with these failures is growing as they reach retirement age. And as if that weren't enough, the penalties for failing to meet business SLAs</description>
      </item>
      <item>
         <title>Bracing SAP for the Application Economy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/bracing-sap-for-the-application-economy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/bracing-sap-for-the-application-economy</guid>
         <pubDate>September 2, 2018</pubDate>
         <description>The Two Key Challenges Your SAP Environments Need to Overcome Does your SAP automation strategy sometimes feel like you’re holding onto an umbrella in a hurricane? Digital disruptors are turning up the heat, and those that fail to adapt, fail to survive. Staying ahead means searching out and implementing ways to maximize efficiency in enterprise process execution. Whether your business-critical processes run exclusively on-prem, or stretch beyond SAP and use a mixture of cloud and SaaS applications, both have the potential to falter. Can you afford the cost of inefficient or failing business applications? What Does Inefficiency Look Like? Sometimes it’s tempting to adhere to the old adage, ‘if it ain’t broke, don’t fix it,’ but in doing so we can let something become outdated and unfit for purpose. When something is enhanced, what was once the standard can quickly become comparatively slow, unreliable and inaccurate. Room for improvement can appear in a number of areas. Perhaps your processes rely heavily on manual handoffs, which create inaccuracies and delays, while requiring skilled resources to focus on mundane tasks. It might be that there’s a lack of visibility across all stages of a particular process, so it’s unclear where an error has popped up. Coordination could be out of line, so process steps are missed or go wrong. Even if everything seems fine with process execution at the moment, there are always ways to improve and reap potentially unrecognized benefits. It is not uncommon for organizations to witness vast holdups as development and testing regularly come to a standstill. Similarly, time can be wasted resolving misconfigured parameters, and non-production systems can be unavailable for days or even weeks. Furthermore, the sheer volume of sensitive data is creating quality and security risks. The net result of all this is that talented staff</description>
      </item>
      <item>
         <title>Three Myths We Must Dispel When it Comes to Mainframe Security</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/three-myths-we-must-dispel-when-it-comes-to-mainframe-security</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/three-myths-we-must-dispel-when-it-comes-to-mainframe-security</guid>
         <pubDate>February 5, 2016</pubDate>
         <description>We dispel some of the common myths around mainframe security and why management is key to keeping your organization's data safe. There are many who uncritically hold the belief that the mainframe z/OS system is inherently secure, without additional attention or effort. In reality, it's more accurate to say that it is the &quot;most securable&quot; platform, but is the most secure platform only when appropriately managed. So what is really going on? Remember classic Hollywood movie scenes of evil hackers getting into the CIA's mainframe? The Hollywood scene is certainly a dated one and still far-fetched. Fast-forward to today's reality about security in the application economy and the services put forth by a highly interdependent and complex hybrid infrastructure in the data center. Myth 1: The Mainframe has never been hacked Okay, so yes – the mainframe remains the most &quot;securable&quot; platform, and the inherent capabilities of the z/OS platform and the use of external security managers (ESMs), including CA Top Secret, IBM RACF, and CA ACF2, have much to do with this reputation. That said, existing and new controls need to be applied and monitored to maintain genuine data protection. There have been three publicized instances of mainframes being technically hacked to date. The most prominent is the well-known Logica breach in Sweden, where massive data loss occurred at the Swedish Central Administration – not because of the mainframe itself but due to weak password rules and elevated access. It's a great lesson that &quot;security is security&quot; regardless of the platform. Add to the mix insider threats, social engineering, mainframe experts' retirements and skill gaps – organizations now have the perfect storm where an unintentional oversight can happen and leave critical data and resources exposed. Myth 2: Mainframe data stays on the mainframe In the application economy and today's modern data</description>
      </item>
      <item>
         <title>What a Hearty Breakfast Can Teach us About Full-Stack Monitoring</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/what-a-hearty-breakfast-can-teach-us-about-full-stack-monitoring</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/what-a-hearty-breakfast-can-teach-us-about-full-stack-monitoring</guid>
         <pubDate>June 10, 2018</pubDate>
         <description>Using monitoring and analytics to deliver top-notch digital interactions with customers. A full stack, that's what I like. Yes, this is a technology blog and I am writing about pancakes. Pancakes are good. Flapjacks. Even crepes (those really thin pancakes). All good. But when you need a full stack and only get a half stack, or even worse a single stack, it can be pretty disappointing. That's true whether we are talking about delicious pancakes or the monitoring and analytics tools that IT Operations Management (ITOM) professionals use. To push the food analogy a little further – with pancakes, the stack is made up of very similar things. A pancake is pretty much a pancake. But in the world of monitoring and analytics, the elements of a given digital delivery chain can be quite different. These elements include the application itself, the microservices, cloud services or APIs it uses, gateways, switches, middleware and connectivity layers, servers (both physical and virtual) and storage, and so on. It makes sense that monitoring tooling needs to take into account the uniqueness of each element: application monitoring for applications, infrastructure monitoring for infrastructure, and network monitoring for networks. But when your organization seeks to deliver top-notch digital interactions with customers that will differentiate your business, there are three key things you'll want to consider. Avoid the one-trick pony: A monitoring tool that addresses only one element of your digital delivery chain can be useful by itself, but it leaves you vulnerable to a siloed approach with many individual solutions that neither integrate from a usage perspective nor cross-correlate. Beware of blind spots: Many monitoring solutions promise full visibility or full-stack monitoring but fall short of a true end-to-end solution. This leads to blind spots and lengthier response times to find</description>
      </item>
      <item>
         <title>Podcast: The Importance of APM Transaction Maps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-the-importance-of-apm-transaction-maps</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-the-importance-of-apm-transaction-maps</guid>
         <pubDate>April 27, 2018</pubDate>
         <description>In this podcast, the third in our series, CA's Amy Feldman and Andreas Reiss expand on their conversation from our second podcast and dive deeper into the importance of leveraging transaction maps and the power of CA APM's Assisted Triage for container monitoring.

They describe not only why transaction maps are an important part of our APM strategy, but also how CA APM, with Assisted Triage, differentiates itself from other APM solutions on the market.

Stay tuned for our next podcast, where we will discuss how Assisted Triage can be used to detect problems in container environments.

Fun fact: Andreas Reiss was one of the creators behind CA APM's Assisted Triage. If you have questions for Andreas, please leave us a comment below!
</description>
      </item>
      <item>
         <title>160 MHz Channels: The Wi-Fi 6 Superhighway</title>
         <link>https://www.broadcom.com/blog/160-mhz-channels-wi-fi-6-superhighway</link>
         <guid>https://www.broadcom.com/blog/160-mhz-channels-wi-fi-6-superhighway</guid>
         <pubDate>August 23, 2019</pubDate>
         <description>Wi-Fi 6 is the fastest, most versatile Wi-Fi standard, and it was built to fully optimize the wireless ecosystem for all of our everyday devices. While it has faster peak data rates, the biggest upgrade consumers will see is when there are many devices simultaneously trying to get on the air, whether it's in the connected home, in cloud-enabled offices, in hospitals and schools, or at public events in stadiums and conferences. The average user will see four times higher throughput in these crowded environments along with better power efficiency, which boosts device battery life. These benefits of Wi-Fi 6 will be fully unleashed with wider bandwidth from contiguous 160 MHz channels. Hello, OFDMA and MU-MIMO How does Wi-Fi 6 deliver these massive benefits? One of Wi-Fi 6’s key new features – Orthogonal Frequency Division Multiple Access (OFDMA) – greatly improves capacity and performance by enabling more simultaneous connections and more efficient use of spectrum. It also increases spectrum capacity by slicing channels into smaller pieces, which together host multiple devices at the same time. Think of it in terms of trucks: legacy Wi-Fi is like a pickup truck, only letting you haul a limited amount of data, while OFDMA is like a tractor-trailer, letting you haul huge amounts. Wi-Fi 6 users will see lower latency, faster data rates and more devices online. MU-MIMO, in turn, increases channel capacity by enabling Wi-Fi 6 routers to transmit to multiple devices using all available streams. For example, the throughput from a four-stream router to a two-stream client is limited by the client to half of the router’s capacity. With MU-MIMO, the router can use all four transmit streams to send data to additional devices simultaneously, resulting in a significant increase in overall throughput. Nice to meet you, 160 MHz-wide channels With 160</description>
      </item>
      <item>
         <title>Building a bridge between mainframe and the next developer generation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/building-a-bridge-between-mainframe-and-the-next-developer-generation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/building-a-bridge-between-mainframe-and-the-next-developer-generation</guid>
         <pubDate>July 10, 2019</pubDate>
         <description>The time could not be better for software development on the mainframe. With digital transformation, more and more emphasis is given to personalized, smooth and secure customer experience. Backend systems play an important role, including mainframe transactions and data. It truly is a new age for the mainframe. Yet, you may find it challenging to get everyone on your teams excited about the opportunity. Do some of your developers constantly complain about cumbersome tools and inefficient workflows on the mainframe? Are you hoping in vain to see a bit of enthusiasm in their eyes when they open the green screen and start editing their code in Endevor using ISPF? Chances are you are looking at the new generation of developers raised using modern and often open source tools such as Git, Visual Studio Code, Eclipse Che or Jenkins. They want to make their lives easy and they have little to no patience for anything that seems old or clumsy compared to the user experience they are used to and consider standard. Does that sound familiar? New generation of developers If you want to learn more about this new generation, one of the most comprehensive sources of information is the annual Stack Overflow survey. Don’t expect to learn much in this survey about the highly skilled mainframe programmers who have been contentedly using 3270 for their entire career. Over 50% of professional developers responding to the survey fall into the age category of 25–34 years old, and only 1.6% are over 55 years old. One of the main takeaways from this particular segment of developers is that they put great value in how they go about their work. In the survey, the second highest priority for developers looking for a job was “languages, frameworks and other technologies</description>
      </item>
      <item>
         <title>Say Hello to DevSecOps: Speed, Trust and Reliability on the Mainframe Platform</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/say-hello-to-devsecops-speed-trust-and-reliability-on-the-mainframe-platform</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/say-hello-to-devsecops-speed-trust-and-reliability-on-the-mainframe-platform</guid>
         <pubDate>August 14, 2017</pubDate>
         <description>Take a shift-left security approach across all phases of the SDLC. In the Modern Software Factory, organizations are transforming their culture, processes and tooling to accelerate the delivery of mission-essential applications. Cue DevOps, the integration of development and operations to speed time to market, enhance customer experience and improve operational efficiency. Specifically, DevOps revolves around agile development, continuous testing, continuous deployment and agile operations. Often employing a toolchain, DevOps enables organizations of all sizes to build, test, operate and deploy software code more rapidly, including to the mainframe. The DevOps methodology integrates both Development and Operations experts into a team that focuses on the application rather than the system. This focus requires streamlining tooling and improving collaboration across the enterprise, but it also raises the question: where does security fit into the DevOps lifecycle? The simplest answer: security needs to be everywhere in DevOps, just as it is in all modern IT. Baking security into every aspect of design, development and deployment helps ensure that security is built, quite literally, into digital applications from the outset. Consider Gene Kim's three underpinning principles of DevOps: systems thinking, continual experimentation and, in particular, feedback loops. Feedback loops in the SDLC bring development, operations and security experts from their respective teams together, working as one, as opposed to traditional silos. This approach more easily embeds security principles throughout application designs and team behaviors, safeguarding against external threats, removing internal threats once identified, and improving trust between teams as they iterate on code and processes in an agile way. Welcome DevSecOps: the integration of development, security, operations and the mainframe working together as one.
Here are some quick ways to begin your DevSecOps journey today: Security Principles: Before even writing the first line of code, it is important to understand the security standards in your company and industry, and ensure</description>
      </item>
      <item>
         <title>A Couple Ways to Reduce Your MLC Costs</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/a-couple-ways-to-reduce-your-mlc-costs</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/a-couple-ways-to-reduce-your-mlc-costs</guid>
         <pubDate>February 14, 2018</pubDate>
         <description>The mainframe is a very secure and reliable platform. And it is the most cost-efficient when you consider the millions of transactions that touch the mainframe every second and the scale it delivers. Nevertheless, CIOs and mainframe stakeholders are rightly concerned about economics and continuous cost reduction and optimization. The easiest way to get a handle on mainframe costs is to pay attention to MLC. Various studies, including Forrester's, cite that MLC consumes a third of the software budget. IBM's Monthly License Charges (MLC) often consume a third or even more of the mainframe software budget. (Patrick Bartrick, Forrester) So what is MLC? Here is the definition from the IBM website: MLC is a recurring charge that is applied monthly. This charge includes the right to use the product and provides access to IBM product support during the support period. An IBM pricing metric establishes both the prices and the applicable terms and conditions for IBM software products. IBM offers a variety of MLC pricing metrics to meet the diverse needs of our mainframe customers. Simply said, MLC is a sort of pay-for-what-you-use charge, based on the highest peak of CPU use in the month, computed using a rolling four-hour average (4HRA). So you want to optimize the peaks and usage for the most important workloads and shift workloads to different times or LPARs, so that you don't incur an overage or penalty. How do you reduce MLC costs? Read the two tips below. Balance Your MSU Capacity: Process Your Workload at the Lowest Possible MLC Cost. The basic idea is to move MSUs from one LPAR to another, if and when needed, to avoid a CPU peak. That's easier said than done… This idea requires constant monitoring of the LPARs and their available MSUs, analysis of the</description>
      </item>
      <item>
         <title>Are you delivering systems of trust this holiday season?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/are-you-delivering-systems-of-trust-this-holiday-season</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/are-you-delivering-systems-of-trust-this-holiday-season</guid>
         <pubDate>December 13, 2017</pubDate>
         <description>Retailers: Here's one less thing to worry about during the IT freeze period.

Despite the fast and furious introduction of cloud services, the mainframe has held steady as an integral part of retailers' hybrid data centers. Brick-and-mortar retailers (like Macy's, Walmart, and H&amp;M) that rely on mainframe technology understand that it's crucial to keep the customer trust they spent decades building as they expand into the digital marketplace.

Behind the scenes, the mainframe is mission essential, powering 18 of the top 25 retailers' data centers and storing 71% of corporate data. There's a reason for that: IBM's latest model can power 12 billion encrypted transactions per day. And it runs pervasive encryption at cloud scale 18x faster and at just 5% of the cost of x86 systems. In addition to supporting thousands of transactions, mainframes efficiently store data and connect to devices simultaneously for thousands of users.

As any retailer will admit, this kind of reliability is never more crucial than from Black Friday through New Year's Day, when payment transactions (which run through mainframes) are at their apex: IBM Z handles 87% of all credit card transactions and nearly $8 trillion in payments a year. If your leadership hasn't always understood the value of the mainframe in their environment, consider these key benefits.
</description>
      </item>
      <item>
         <title>Why the Time is Now to Modernize Development on the Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/why-the-time-is-now-to-modernize-development-on-the-mainframe</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/why-the-time-is-now-to-modernize-development-on-the-mainframe</guid>
         <pubDate>October 13, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 1 Earlier this year, my colleague Sreenivasan Rajagopal blogged on &quot;Cloud comes to Mainframe,&quot; highlighting the incredible opportunities for mainframe if businesses could manage the platform with the same agility as the typical &quot;cloud experience.&quot; This vision resonates incredibly well with my engineering team and our ongoing work to design a DevOps solution for our customers, which happens to be another popular topic that also brings the promise of greater business agility. Our goal is to bring both the cloud experience and DevOps to mainframe, and to revolutionize how the mainframe is experienced by professionals working in Development and Operations. I am incredibly excited to be sharing the journey of our team, so stay tuned over the coming months as we reveal piece by piece the playbook to modernize development on mainframe. Voice of the Customer All great design begins with the voice of the customer. Our customers told us they had three key objectives when enabling modernization: Make mainframe development attractive for the new generation of developers: Many organizations are facing a generational shift in their workforce – mainframe experts are retiring, ceding responsibility over mission-essential applications to a new generation of developers. These new developers have limited interest in becoming experts on the mainframe, and are even less inclined to adopt historical practices established by their predecessors. Insight: Businesses must therefore rethink application development for mainframe. 
Make mainframe development a part of the enterprise DevOps initiative: Line of business teams who are increasingly adopting DevOps principles are struggling to integrate mainframe development into their existing delivery pipeline, leaving mainframe development as a critical bottleneck. Insight: Businesses must therefore reconfigure their DevOps toolchains to support mainframe applications. Make nearly ‘zero touch’ and ‘zero cost’ development/test environments on the mainframe: Creating dev/test environments is a</description>
      </item>
      <item>
         <title>The Risk is in the Data -- But Where is the Data?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-risk-is-in-the-data-but-where-is-the-data</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-risk-is-in-the-data-but-where-is-the-data</guid>
         <pubDate>March 19, 2019</pubDate>
         <description>&quot;The safety of the people shall be the highest law,&quot; said Cicero long ago. But even though safety is a basic concern and many efforts are made to protect organizations and corporate systems, too often we read about data breaches. Every one of us has several examples in mind. Every one of us has updated a password here and there after learning that this or that system has experienced a data breach… and those are only the ones we get to know about. The truly successful data breaches are the ones we never hear of. The reward that motivates a breach is access to the gold of today -- the data. Protecting data is a must, but it is certainly not a simple task. With huge and ever-growing amounts of data lying across multiple systems and applications, increasingly strict regulations, and interconnected environments, it can look like overwhelming work. One way to attack such a problem is to divide it into pieces and take one step at a time. The first thing to do is to decide what, in the vast amount of your data, needs to be protected against a breach. That is, the data that alone, or in combination with other information, can be dangerous for your business or your customers, or is simply protected by law. This data can differ from one case to another, but for the purpose of this text we will call it sensitive data. Now that we know what to protect, we need to know where it is. This again can look simple, but it is not. Data lives in databases… in the files you exchange to complete your processes, in the reports you produce for your executives</description>
      </item>
      <item>
         <title>The Art of Rollback, Part 3</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback-part-3</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-art-of-rollback-part-3</guid>
         <pubDate>September 13, 2018</pubDate>
         <description>Rollback Strategies Through the Ages This post covers rollback strategies, but first let's recap what we learned from the previous chapters in the series: The Art of Rollback Part 1 and Part 2. I recommend you go back and read them if you haven't already! We learned that a good rollback mechanism cannot be designed without intimate knowledge of the application architecture, the nature of your components and their dependencies. Now that we know what we have to restore and in which order, the question is how? There are always several possible strategies available to restore your services. The main criterion for deciding which one to choose is speed. For this reason, the rollback must be automated, and the best rollback features available must be leveraged for each of your application components and technologies. The automation tool will be in charge of orchestrating the different technologies involved in the rollback process. How Much Money Should You Spend on Rollback? Always go for the fastest process you can afford. No company can afford data loss, data corruption or service interruption. Never cut costs on this part. Trying to reuse old backup systems or shared (mutualized) backup, for example, is not advisable, as investing in new technologies can give you more responsiveness with an immediate ROI. The budget for rollback implementation should be calculated at the beginning of your project, according to the cost of an error for the business, and not by looking at the best solution price or providers' bundles available on the market. The acceptable cost of an error should not be estimated by dev or ops but by the business itself, as it can be a mix of unexpected factors. Some are part of the company plan and should not be shared internally. For example, it</description>
      </item>
      <item>
         <title>Data Lakes vs. Data Warehouses</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/data-lakes-vs-data-warehouses</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/data-lakes-vs-data-warehouses</guid>
         <pubDate>May 9, 2018</pubDate>
         <description>Data lake or data warehouse: what do they do, and which one is right for you? Data warehouses and data lakes are two types of data storage repositories, each with its own functions and capabilities. James Dixon, the founder of the big data analytics company Pentaho, explains the differences between the two: &quot;Think of a data warehouse as bottled water: it's cleansed, packaged, and structured for easy consumption. The data lake, meanwhile, is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, take samples, or dive in.&quot; But what do these data storage systems do, and how can they be used in your organization? What is a data warehouse? A core component of business intelligence, the data warehouse is a central repository of integrated data from one or more disparate sources that is used for reporting and data analysis. Information in a data warehouse is typically organized so that pre-determined types of questions can be answered quickly. When the board makes a strategic decision on its future, or a call center agent reviews a customer's profile, the data is typically being sourced from a data warehouse. What is a data lake? A data lake is a storage repository that holds a vast amount of raw, structured and unstructured data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Data lakes do not require users to plan the analyses they want to perform in advance, because the data is not organized prior to storage. Which should you choose? Given the different uses for data lakes and</description>
      </item>
      <item>
         <title>An Automation Platform Disrupting Digital Transformation Norms</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/an-automation-platform-disrupting-digital-transformation-norms</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/an-automation-platform-disrupting-digital-transformation-norms</guid>
         <pubDate>July 3, 2018</pubDate>
         <description>The value of transformative and pervasive automation Four years ago, the Britain-based McLaren F1 team reignited a partnership with Japanese manufacturer Honda. The two had worked together some twenty-five years prior and dominated proceedings, annihilating the competition. The plan this time was to emulate the eight titles they had won during their first relationship. And possessing the strongest driver lineup on the grid in Fernando Alonso and Jenson Button (already three world championships between them), seemingly nothing could stop them. The ensuing years of their reunion were a car crash, often literally. The car was slow, unreliable and near-impossible to drive. Both McLaren and Honda were deeply embarrassed, blaming each other for faults, design flaws and a lack of cohesion. McLaren said the engine was at fault; Honda claimed it was the chassis. After three years, McLaren terminated the relationship and both Honda and McLaren undertook a huge reshuffle of their own technical teams. At this point, though, they were the slowest car on the grid, and their drivers, bored of lining up at the back, were taking mid-season breaks to pursue other endeavors such as the Indy 500. The brand damage to McLaren and Honda has been significant amongst racing enthusiasts and will be lasting if they cannot rapidly get back to the front as part of new teams. This story mirrors the situation in software: how quickly and how badly something can go wrong if different teams are not working on the same page. The teammates may have the same goal in mind, and while individually they might be amongst the most talented, collectively they fail. Customers (in the above analogy, drivers and sponsors) are unafraid to start looking at other options should failure ensue. Modern enterprises often attempt to kickstart their stalled revenue growth by extensively hiring developers and</description>
      </item>
      <item>
         <title>Automate Your Workload with CA Workload Automation AE (AutoSys®)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/automate-your-workload-with-ca-workload-automation-ae-autosys</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/automate-your-workload-with-ca-workload-automation-ae-autosys</guid>
         <pubDate>March 5, 2018</pubDate>
         <description>This latest release adds new features and enhancements, including compatibility with the CA Automic One Automation Platform. As our customers become increasingly familiar with the concept of the Modern Software Factory, we are continuing to look at ways we can help you build one. This means both developing innovative new products and enhancing existing solutions to ensure that you can gain the requisite agility, insight and security. We believe that automation is the key to unlocking a Modern Software Factory within your enterprise, and we're pleased to announce that with this latest update (version 11.3.6 SP 7), CA Workload Automation AE (AutoSys®) is easier than ever to integrate into your complete IT ecosystem. CA Workload Automation AE orchestrates business tasks so critical processes can run automatically and continuously, as simply as defining, scheduling, and monitoring your jobs. The CA Automic One Automation Platform Following the acquisition of Automic Software, we at CA have made it possible to directly integrate CA Workload Automation AE with the CA Automic One Automation Platform, which enables connectivity across a variety of applications, databases, operating systems, and infrastructures, and extends event-handling capabilities. If you are a user of CA Workload Automation AE, you can now leverage features from both solution sets, allowing your business to benefit from easily adopting all of the tools it needs in the process of becoming agile, flexible, and scalable. We are constantly looking at innovative ways we can enhance our offering to our customers. With this new release, you can now take advantage of a host of other benefits listed below. Visit docops.ca.com to see all the enhancements in this feature-rich release.
Cloud Utilization Many businesses use services on the cloud, and AutoSys now provides out-of-the-box support for both AWS and Azure Database Services, so connecting to remote database services is easier</description>
      </item>
      <item>
         <title>The Continuous Delivery Challenge in the Enterprise</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-continuous-delivery-challenge-in-the-enterprise</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-continuous-delivery-challenge-in-the-enterprise</guid>
         <pubDate>June 28, 2018</pubDate>
         <description>Getting Beyond &quot;It Works&quot; Continuous delivery (CD) as an engineering practice is taking hold at enterprises everywhere. Most forward-looking app development efforts rely on CD to one extent or another. Typically, that takes the form of a functionally automated pipeline for code promotion and some test execution. Some amount of the delivery work, such as database changes, provisioning or configuration management tickets, and production signoffs, is still done manually. These forward-looking teams therefore have a CD pipeline that 'works' reasonably well. There is an old engineering adage that accurately describes the attitude many such teams have towards adopting CD: &quot;First, make it work. Then, make it work well. Finally, make it work quickly and efficiently.&quot; Today, enterprises are getting through the first and second phases of that adage in their CD adoption efforts, but they are going to want to reach the third eventually, and that's where the difficulty lies. Organizations in this position should start planning for phase three now to avoid the expense and disruption of bringing it under control further down the line. A Pipeline of Pipelines Enterprises attempting to transform their app delivery approaches typically rely on team-level efforts. As a result, they usually have app delivery pipelines in different areas of the business. Many of those current efforts have a very limited scope, focusing only on the basic functional tasks of the specific technical environments of the specific application system they support. Sometimes the focus may even be on just a subset of those environments. Furthermore, the pipelines are often duplicative of each other across teams, even if the technology stacks are the same. There is nothing but manual effort and spreadsheets coordinating the pipelines. This is a result of teams' natural, but narrow, focus on their functional needs.
The narrow focus can create architectural problems when it</description>
      </item>
      <item>
         <title>How Zowe Makes Access to Your Mainframe Environment Easy and Secure</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/how-zowe-makes-access-to-your-mainframe-environment-easy-and-secure</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/how-zowe-makes-access-to-your-mainframe-environment-easy-and-secure</guid>
         <pubDate>April 7, 2019</pubDate>
         <description>(*) - available in Brightside Enterprise In discussions with existing and future clients, we hear over and over again the need for easy access to new technologies and, at the same time, enterprise-grade features such as high availability (*) and security. The Zowe Open Mainframe Project enables a large community of developers to interact with z/OS in the same way that we are used to when working with cloud environments – setting up a sandbox, checking out code, using modern development tools and frameworks, and creating robust automation that runs build and test tasks on z/OS. Zowe is an infrastructure facilitating access to the mainframe for developers, system operators, system programmers, security administrators, and all other people who manage mainframe environments. The Zowe components are created using current web UI frameworks and tools such as Angular and React. The web applications are integrated into the Zowe Desktop, and their APIs can be accessed via the Zowe API gateway, which provides a single point of access and a unified method for authentication. Zowe is built on top of proven technologies that allow the development of resilient API services, such as Netflix Zuul and Eureka. The applications can be developed on the Java or Node.js platforms, which are available and supported on z/OS. When it comes to security, Zowe is based on the facilities that are provided by z/OS and technologies available on the System z platform. Authentication and Authorization A Zowe user needs to provide valid z/OS credentials in order to use the Zowe Desktop, Zowe CLI, or any Zowe API service. These credentials are validated using the System Authorization Facility (SAF) interface on z/OS, which routes the validation to a security product such as CA Top Secret® for z/OS, CA ACF2™ for z/OS, or IBM RACF®. The Zowe Desktop does not store the credentials in the browser.</description>
      </item>
      <item>
         <title>Why Automation Is the Gateway to Continuous Innovation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/why-automation-is-the-gateway-to-continuous-innovation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/why-automation-is-the-gateway-to-continuous-innovation</guid>
         <pubDate>October 14, 2018</pubDate>
         <description>Automation lays the groundwork for growth and speed at GE Appliances Like a winding garden path ending at a locked gate, repetitive and complex manual processes can only get you so far. Unlocking this gate with automation invites your business to a world of new possibilities. Automation can be used to solve a variety of business challenges – such as managing workflows, orchestrating services, and releasing applications at speed and scale – while freeing employees from time-consuming and tedious tasks. This might be why automation is being rapidly implemented across industries ranging from fast-moving consumer goods to government agencies. Kevin Price, Principal Infrastructure Engineer in Information Security, shares how automation has streamlined application deployment and brought innovation to app development at GE Appliances, one of the largest appliance brands in the world. Agile Applications for GE Appliances At GE Appliances, automation has drastically sped up the deployment of the company's application releases by optimizing and standardizing the software delivery lifecycle. &quot;We used to have a waterfall approach to application development. We would gather enhancements throughout a year, and it would take 12 months to deliver a project. As we've focused more on automation, we've shifted our project management methodology to be more agile,&quot; Price explains. That shift included bringing automation to the time-consuming parts of the deployment process: integration, testing and deployment. Continuously delivering application releases allows IT teams to better align their app updates with their users' changing needs, as well as work more closely with other areas of the company to understand and improve various operational tasks. 
&quot;We have freed up IT resources to work more closely with the business side in terms of gathering enhancements,&quot; Price adds, ultimately providing services that help the company stay a leader in the market for appliances. Scaling the Wall to Scalability The biggest challenge GE Appliances faced</description>
      </item>
      <item>
         <title>The Importance of Identity in Your ALM Data Strategy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/the-importance-of-identity-in-your-alm-data-strategy-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/the-importance-of-identity-in-your-alm-data-strategy-rally-software</guid>
         <pubDate>July 22, 2019</pubDate>
         <description>In the first part of this blog series, we presented the analogy that the benefits of blockchain technology are the same benefits we seek to have in an ALM data strategy. This is potentially a multi-million-dollar analogy because there is a lot to gain throughout your product development flow once you tie in the key tenets and benefits of blockchain. Enter the first benefit, which is identity. Every time I talk to a customer, I almost always witness a spoof of the old Abbott &amp; Costello Who’s On First skit. The simple act of trying to identify ‘the work’ that is being done in an organization, and by ‘whom’, turns into chaos. It is a very real thing in organizations today. This lack of identity around the work breeds ambiguity and uncertainty, and erodes trust. We want the direct opposite result in product development. Blockchain technology lowers uncertainty during the exchange of value because it provides identity about What is being transacted, by Whom, and When. We will focus on these three W’s for the moment, yet identity certainly empowers us to gain clarity around all five W’s. In blockchain, transactions are linked together to provide a chronological history or audit trail. The same is needed in product development: an overview of all the transactions made from inception to delivery. In the context of product development, the ability to quickly identify the pertinent information about the ‘work’ being planned and executed, in real time, means you can have quick conversations and make quick decisions; the domino effect is shortened delay times across the lifecycle and tighter feedback loops. With blockchain, technology is used to eliminate the need for a third-party intermediary to prove the identity of an asset, and who/what/when the work for that asset was completed. Let’s think</description>
      </item>
      <item>
         <title>Self-Driving Cars to Self-Driving Data Centers</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/self-driving-cars-self-driving-data-centers</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/self-driving-cars-self-driving-data-centers</guid>
         <pubDate>October 4, 2017</pubDate>
         <description>Machine Learning and AI deliver on the self-driven promise Self-driving cars are making headlines every day with the vision of a car that runs itself, maintains itself, prevents accidents, and gives an alert when help is needed. Such a concept gives rise to further thoughts, such as, what would self-driving data centers look like and how would they change IT as we know it? A self-driving data center is not just a vision, as we are already progressing toward full self-driving capability with different levels of automation and machine learning-based solutions to improve agility, reduce costs, and focus on higher-value business outcomes. Ali Siddiqui, General Manager, Agile Operations at CA, and I first made this analogy in our previous blog, and the idea sparked great interest, especially regarding how an existing data center with mainframe, which is facing a skills shortage, can be gradually turned into a self-driving, autonomous data center. A Washington Post article by Aaron Cole of The Car Connection details five levels of self-driving cars - all of which correlate directly to the growth and development of the autonomous data center, automated by leveraging machine learning and artificial intelligence: Level 0: This is our starting point. It’s what most of the human race drives today – cars with no automation, where the person at the wheel is in complete control. Historically, this has described the corporate data center perfectly. IT operational tasks are done by humans, be it troubleshooting or root-cause analysis across a complex technical stack. For example, with a Level 0 data center, if a server runs out of capacity, you have to figure out if it’s an application problem, a server problem, a data flow problem, etc. – then set about fixing it. Not infrequently, multiple subject matter experts have to</description>
      </item>
      <item>
         <title>Broadcom is Announcing Zowe Support — Why Does It Matter?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/broadcom-is-announcing-zowe-support-why-does-it-matter</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/broadcom-is-announcing-zowe-support-why-does-it-matter</guid>
         <pubDate>March 12, 2019</pubDate>
         <description>Zowe (www.zowe.org) is the first open source project on IBM z/OS and was announced at the Open Source Summit in August 2018; its three founding members are IBM, Rocket and Broadcom (CA Technologies at the time). Within seven months of the announcement, the first GA version of Zowe was released — reaching a major milestone with a 100% open source (Eclipse Public License version 2.0) release ready for production. Zowe is an extensible framework for connecting applications and tools to mainframe data and applications. It aims to make the mainframe an integrated and agile platform within the changing IT architectural landscape. This translates into a set of goals: Attract new people; Demystify the Z platform; Enhance integration and consumability; Promote an open community of practice; Reduce the learning curve; Improve productivity; Create modern, platform-neutral interfaces; Build a cloud-like user experience; Simplify architecture; Reduce operational overhead; Improve co-existence; and Enable a rich ecosystem of free and commercial solutions. See the video presenting Zowe: https://youtu.be/NX20ZMRoTtk Along with the GA of Zowe 1.0, software vendors such as Broadcom are now starting to deliver offerings based on Zowe. A perfect example is CA Brightside, developed by Broadcom and named the most innovative DevOps solution of 2018 by DevOps.com. And now, Broadcom is offering enterprise-grade Zowe support for users. Now is the time for customers to investigate deploying Zowe in their organizations and reap the benefits of such a transformative infrastructure. To successfully adopt Zowe, customers have the following needs: The ability to share a single instance of Zowe with multiple products from multiple providers. This facilitates product consumption and co-existence through a shared API mediation layer, a shared Command Line Interface and a shared Web Desktop User Interface. A single point of support for all Zowe core components, regardless of which applications or products are using them. 
This enables customers to deploy Zowe</description>
      </item>
      <item>
         <title>How to Better Support Testing in Production with CA Application Performance Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-better-support-testing-in-production-with-ca-application-performance-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-to-better-support-testing-in-production-with-ca-application-performance-management</guid>
         <pubDate>November 1, 2017</pubDate>
         <description>The term “testing in production” is polarizing. Mention it to some teams and they’ll say it’s sheer genius; a critical method for gaining realistic insight into app performance behavior. Others, on the other hand, might scoff that it’s just dev and QA playing catch-up – running tests they didn’t have time for earlier in the development cycle. But testing in production is more about supporting the realities of modern software development than it is about playing catch-up. Yes, the term will probably get business folks and ops teams squirming, but in truth it’s an essential element of successful DevOps and continuous delivery initiatives. In the modern development world of APIs, cloud, containers, microservice architectures and the Internet of Everything, testing in production isn’t just common sense – it’s really the only way to progress. Massively complex distributed applications will perform in crazy ways, users will encounter problems you never anticipated, and cloud services no longer under your direct control can and will fail. And without a production testing strategy in place, these problems challenge even the best operational capabilities. Worse, they’ll remove critical feedback loops and negatively impact decision making. But in truth, testing in production isn’t new; it’s been happening for years. Every time we slavishly update our Facebook pages and LinkedIn profiles, we’re active participants in experimentation – even if we don’t know it. In enterprise IT, testing in production goes by many names and is also becoming well established. This includes: Blue-Green Deployments – a practice that reduces downtime risk by running two identical production environments (blue and green), only one of which is live (blue). As you prepare an app release for deployment, testing is conducted in the green environment. Once you have the green light (forgive the pun), traffic is switched to the</description>
      </item>
      <item>
         <title>Not Just “Another Meeting”: Lean Coffee for Beginners</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/not-just-another-meeting-lean-coffee-for-beginners-rally-software-formerly-ca-agile-central</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/not-just-another-meeting-lean-coffee-for-beginners-rally-software-formerly-ca-agile-central</guid>
         <pubDate>June 18, 2019</pubDate>
         <description>This is part 1 of my Lean Coffee series, written for folks who are just learning about the concept of Lean Coffee, or just starting to roll them out in their organization. For a deeper dive into the practice of Lean Coffee, read part 2: Caffeinating Lean Coffee to Maximize Team Productivity. For anyone who has worked in an agile environment (or an aspiring one), the concept of a “Lean Coffee” comes up quite often. Lean Coffees are “structured, agenda-less meetings that help teams understand, organize, prioritize and collaborate on a focused set of democratically selected topics” (LeanCoffee.org). Let’s explore the essential elements behind planning and executing an effective Lean Coffee. I’ll share how you can leverage these highly underestimated meetings to drive toward meaningful ideas and real outcomes in your organization—all while having fun and communicating with each other. What is a Lean Coffee? What it is: An easy, lightweight way to collaborate and facilitate discussion. What it isn’t: Designed to be onerous, overbearing, or “just another meeting” on the calendar. Lean Coffee is a meeting format that allows 3 or more people to address and openly discuss their ideas and topics of concern among a working group. Lean Coffees commonly consist of 4 stages – Contribution, Prioritization, Discussion, and Accountability &amp; Action – that drive toward meaningful outcomes in an organization. A good Lean Coffee format first starts with clear Working Agreements. Be clear about what you’re trying to accomplish and what you want to get out of the Lean Coffee. Don’t be afraid to set expectations with your group, and continually remind the participants about those expectations. Your role here is that of a facilitator. Make sure you keep an eye on the flow of the Lean Coffee and, if you need to, keep it on track. Walk everyone</description>
      </item>
      <item>
         <title>Actively manage your products, or fail</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/actively-manage-your-products-or-fail-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/actively-manage-your-products-or-fail-clarity-ppm</guid>
         <pubDate>April 10, 2019</pubDate>
         <description>Organizations have limited amounts of money available for investment, and that investment has to generate a substantial return - achieving all of the goals and objectives that have been set for the reporting period, while also moving the organization closer to achieving its long-term vision. There isn't room for investment dollars to be wasted on initiatives that don't contribute, or even on projects that contribute in a less than optimal way. That's where the integrated product portfolio comes in. It informs the decisions around the product portfolio by allowing leadership to ensure: Investments are focused on innovation and growth - that's the only way corporate performance can advance without throttling (cost reduction is important at times but will always have limited upside because all costs have a floor). Investment is balanced across different elements of the portfolio. Investments must be distributed across products at different stages of the lifecycle - ensuring that products currently enjoying high levels of adoption and sales remain attractive enough to fund the development of new products, while limiting investment in declining products so they remain profitable. The same logic applies to distributing investments across different market segments and geographical regions. Project managers and teams have access to the full context of why their initiatives were approved. Instead of simply having a high-level explanation of project purpose, the teams can see how their product or service fits into the enterprise product portfolio and how their work directly benefits the organization's ability to succeed. Of course, all of this is dynamic. Modern business is continuously evolving as technology advances, customer expectations increase and competitors drive innovation. 
The product portfolio must therefore be managed actively, ensuring not only that each product or service is accurately represented in terms of current positioning, but that future directions and plans</description>
      </item>
      <item>
         <title>Close Books Faster and Eliminate Risk with Workload Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/close-books-faster-and-eliminate-risk-with-workload-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/close-books-faster-and-eliminate-risk-with-workload-automation</guid>
         <pubDate>August 16, 2018</pubDate>
         <description>How workload automation helped a healthcare service automate their manual finance processes to increase efficiency, cut costs and reduce risk. Your finance department has the weight of the world on its shoulders: Tracking business financial performance, driving down operating costs, providing management reports to the board, keeping a close eye on compliance and financial risk. This burden is made worse by the way the weight is carried. Manual processes slow everything down and exhaust precious resources. They allow errors to creep in. Closing your books every month becomes a daunting and time-consuming task. What if you had a solution that gave you full visibility and control over your financial processes? One that enables you to close your books more quickly – freeing up time and enabling the finance department to focus on revenue growth and other opportunities? Step forward workload automation. It automates the financial processes you're currently executing manually. This means you can create, execute and monitor finance business processes quickly and easily, across on-premises environments as well as private and public clouds. Errors and delays are eliminated from end-to-end processes, and your interconnected finance data flows to the front line more quickly. The result? More time for analysis, faster feedback to the board and less risk of a poor audit statement. Moreover, accelerated financial closes enhance your interim financial reporting and shift your business from historical performance reporting to timely analysis that anticipates future trends. NHS Relies on Workload Automation That's exactly what a leading national healthcare organization has achieved using a workload automation solution from CA. The UK's National Health Service (NHS) is the largest publicly funded and state-run healthcare system in the world. 
In 2005, NHS Shared Business Services (SBS) was established as a joint venture between the Department of Health and the consultancy Sopra Steria to support both</description>
      </item>
      <item>
         <title>Modern Application Monitoring: An Alternative to the ‘Swivel Chair Approach’</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/modern-application-monitoring-an-alternative-to-the-swivel-chair-approach</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/modern-application-monitoring-an-alternative-to-the-swivel-chair-approach</guid>
         <pubDate>April 9, 2018</pubDate>
         <description>Imagine if application users could see behind the serenity of the well-crafted (OK, sometimes cobbled together) user interfaces that deliver the seamless user experiences envisioned in brainstorming whiteboard sessions, agile scrums, big room planning and/or flow diagrams. If they could, they would likely have an eye-opening moment as they grasp the modern application architectures and come to understand the range of technologies working across on-premises and cloud infrastructures, middleware, APIs and microservices to present data, ensure security, and optimize availability and performance, all while being readily adaptable to address new business requirements. It’s a truly remarkable feat. Oddly, none of this happens seamlessly; instead, countless seams must be navigated and managed to give users a seamless experience. And none of this complexity is delivered overnight. It accumulates over time, causing challenges for IT: How do we monitor performance across the full expanse of a modern application? How do we know our customers are getting the best value and experience from the application? Do we have the information needed to precisely isolate issues when they arise? Is the data in a meaningful, actionable context for each group of stakeholders? Are teams using the data, remediating issues and learning over time? To deliver desired capabilities, modern application management systems bring disparate technologies together for developers, testers and operators. To deliver the desired performance and user experience, application performance management (APM) needs to work across technologies and present data and insights in a context meaningful for developers, testers, operators and product owners. APM must collate and provide user views with timely, precise, accurate and actionable insights. 
By interpreting, baselining and differentiating between minor anomalies and major incidents, APM brings value to all levels of application ownership, not just to the developer who can interpret the raw data. All this is quite reasonable until one looks</description>
      </item>
      <item>
         <title>Mainframe - DevOps and Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-devops-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-devops-automation</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>For many IT organizations, speeding up release cycles on the mainframe is more of an aspiration than a reality. But by applying DevOps best practices to the process of developing, testing and deploying mainframe applications, there are many opportunities to achieve the faster release cycles your business needs. In Forrester's words, &quot;DevOps is a set of practices and cultural changes, supported by automation tools and lean processes, that creates an automated software delivery pipeline, enabling organizations to deliver better-quality services and applications faster to ultimately win, serve, and retain their customers.&quot;[1] The DevOps opportunity For application development managers and team leaders, the most important priority is whether a platform – mainframe or otherwise – enables their teams to work at the speed the business demands. DevOps best practices accelerate application delivery by making it easier and more efficient to update or change code. They ensure the same development, testing and deployment processes are applied throughout the business, regardless of platform. And by making the mainframe more accessible to all, they encourage younger developers who may be less familiar with the platform – especially as the new generation of developers loves finding ways that automation can help them work better. Here are three areas where I'd urge you to consider the benefits of applying DevOps best practice in order to accelerate application delivery. Development Modernizing your application development tools gives you a more efficient means of enforcing best practices and coding standards. This creates a better understanding of your enterprise applications throughout your teams. It also helps you integrate Agile development tools and Source Code Management, enabling total visibility and control throughout the development lifecycle. 
Testing A DevOps platform makes it easier to access production-like test data without compromising data privacy. You get streamlined provisioning</description>
      </item>
      <item>
         <title>System of Trust: Why the New IBM Z is a Game Changer</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/system-of-trust-why-the-new-ibm-z-is-a-game-changer</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/system-of-trust-why-the-new-ibm-z-is-a-game-changer</guid>
         <pubDate>September 25, 2017</pubDate>
         <description>Today IBM announced the IBM Z14, the next generation of mainframe. As a key partner and ecosystem vendor in the mainframe software community, here are some thoughts about this latest announcement. With the innovation that IBM has placed in enabling pervasive encryption for corporate data, enabling the Z14 to process up to 12 billion encrypted transactions per day – it is safe to say that what IBM has done here is to essentially make it possible for the Internet, at scale, to be pervasively encrypted, and therefore to be a System of Trust. And this is no small feat. Up until now, the degradation of performance associated with encryption on x86 has meant that less than 2% of corporate data is encrypted – greatly increasing the threat surface associated with data breaches. With that in mind, here are three areas where we are particularly excited to bring these innovations and their capabilities to our customers. Data-centric Security and Privacy The IBM Z14 will support businesses affected by new regulations that are swiftly coming into play, such as the EU-US Privacy Shield agreement and the European Union General Data Protection Regulation (GDPR), which are focused on data privacy. From finding and classifying to alerting and inspecting, IBM and CA collectively provide unified enterprise security that helps strengthen an organization's compliance posture across new and existing mainframes. Blockchain Transactions The new innovation creates a system of trust, which, combined with CA's standards-based management and security services for blockchain, offers usability, speed, data-centric security and enterprise scale. This will be important when implementing blockchain, as ultimately it is an internet of trusted applications that supports digital identity management relating to PII (personally identifiable information), big data and the Internet of Things. 
CA is also contributing services to the Hyperledger project, which will run on</description>
      </item>
      <item>
         <title>Deliver an Agile Enterprise with Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/deliver-an-agile-enterprise-with-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/deliver-an-agile-enterprise-with-automation</guid>
         <pubDate>January 9, 2018</pubDate>
         <description>The Agile Manifesto changed the world of software development, but extending its benefits to the wider enterprise requires automation The Agile Manifesto has been with us for more than a decade and a half. The maturation of the RAD and Extreme Programming paradigms, it has been the de facto approach to software development since its inception. Irrevocably altering our approach to programming and reshaping the technical landscape, its features are well known, and in the automation blog we’ve discussed its concepts, characteristics and benefits in depth. So how does automation fit into the picture? The Manifesto seeks to overhaul existing project management techniques and enable the business to become more fluid. Automation extends this philosophy from a single app with multiple teams (which can simply be handled by continuous integration (CI) tools such as Jenkins) to enterprise standardization and composite/hybrid apps. From Continuous Integration to Enterprise-Wide Automation Software is typically developed by several teams working in conjunction. Individual agile teams may be responsible for a single component, but upon completion all the disparate parts must be able to pull together to create a functioning application, which is why CI came to be. CI demands that each change be merged with the other disparate parts and automatically built and tested to ensure the whole project is in a shippable state. It’s tempting to stop here. But if the end goal is to attain continuous delivery (CD), agility needs to be brought not just to the deployment pipeline but to the enterprise as a whole. This is dependent upon small updates derived from close collaboration with the business, team members and customers. The self-organized teams of the agile methodology seek to deliver rapid-fire software iterations, which are continually evaluated and form a part of a whole release. But without a solid</description>
      </item>
      <item>
         <title>The Welcome Demise of the Rolling 4 Hour Average - Software @ Scale</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-welcome-demise-of-the-rolling-4-hour-average-software-scale</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-welcome-demise-of-the-rolling-4-hour-average-software-scale</guid>
         <pubDate>June 26, 2019</pubDate>
         <description>How Broadcom's New, Flexible Mainframe Consumption Licensing Model Liberates You! Introduced more than 20 years ago, Rolling 4-Hour Average (R4HA) licensing models have become increasingly complex, ponderous, and restrictive. Under R4HA models, the peaks and troughs in customer demand become a business liability to be managed in real time, often by senior technical staff, who are forced to manage their workloads to satisfy a licensing model rather than meet the demands of their business. Broadcom’s new Mainframe Consumption Licensing Model (MCL) is our response to our customers’ requests for a more flexible way of managing mainframe workloads in this new age of digital transformation, where everything is connected and demand for IT resources is unpredictable and volatile. Developed alongside IBM’s recently announced Tailored Fit Pricing model, it acknowledges the renewed and growing importance of mainframes in today’s increasingly hybrid technology environments. If you are investigating the new licensing models – and you should be – here’s what’s changed, and how it applies to your shop. The MCL simplifies mainframe software licensing in two key ways – in administering workloads over time, and in changing how we think about development environments. The administrative and technical burden of administering the R4HA forces teams to actively manage their workload to avoid spikes in resource consumption. These spikes result in the monthly peak by which R4HA and other licensing programs calculate monthly usage. Mainframe customers must cope by delaying important workloads or otherwise capping resources. This results in systems being designed and utilized in ways that conform to a licensing model rather than supporting service levels. In modern environments, this effort has become less manageable because workloads are often directly affected by demand, which can be difficult to predict. 
A truly agile enterprise views this dynamism as an opportunity to be embraced, not</description>
      </item>
      <item>
         <title>Broadcom BroadR-Reach Ethernet Portfolio Brings Autos into Digital Age</title>
         <link>https://www.broadcom.com/blog/broadcom-broadr-reach-ethernet-portfolio-brings-autos-into-digital-age</link>
         <guid>https://www.broadcom.com/blog/broadcom-broadr-reach-ethernet-portfolio-brings-autos-into-digital-age</guid>
         <pubDate>December 7, 2011</pubDate>
         <description>Consumer interest in driver safety and infotainment features is at an all-time high, but automotive technology has not kept up with consumer expectations. Connectivity is edging its way squarely into the equation.

Collision warnings, comfort controls, infotainment and advanced driver assistance systems are emerging as compelling new automotive applications, increasing the need for bandwidth and connectivity within and between in-vehicle networks.

Today, Broadcom responds by unveiling the next generation in automotive connectivity. The Broadcom BroadR-Reach Ethernet portfolio is the broadest automotive Ethernet product portfolio in the industry, consisting of five devices including three highly integrated switches with embedded PHYs, and two stand-alone PHY solutions. All are designed to meet the rigorous demands of the automotive industry.

In addition, the portfolio is the first to enable 100Mbps over unshielded single twisted pair cabling, to increase performance and substantially reduce connectivity cost and cabling weight. Unlike existing Ethernet solutions that are closed (isolated in end-point applications using either LVDS or 100BASE-TX Ethernet cable), Broadcom Ethernet technology enables the migration to an open, scalable network.

This announcement follows the recent introduction of the OPEN (One-Pair Ether-Net) Alliance Special Interest Group (SIG). Established to drive wide-scale adoption of Ethernet-based automotive connectivity as the standard in automotive connectivity, the SIG will address industry requirements for improving in-vehicle safety, comfort, and infotainment, while significantly reducing network complexity and cabling costs. Members include Broadcom, NXP Semiconductors N.V., Freescale Semiconductor, Harman International, BMW, Hyundai Motor Company and Jaguar Land Rover. A license to the BroadR-Reach specification is available to all interested OPEN Alliance members under RAND terms via a license from Broadcom. Visit www.opensig.org to learn more.

For more information on the Broadcom BroadR-Reach automotive portfolio, visit go.broadcom.com/ or check out the Broadcom demo at the Consumer Electronics Show, January 10-13, 2012.</description>
      </item>
      <item>
         <title>The #RealNews Behind the Broadcom-CA Technologies Acquisition</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-realnews-behind-the-broadcom-ca-technologies-acquisition</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-realnews-behind-the-broadcom-ca-technologies-acquisition</guid>
         <pubDate>December 26, 2018</pubDate>
         <description>In July of 2018, Broadcom announced its intentions to acquire CA Technologies. In the press release, Hock Tan, President and Chief Executive Officer of Broadcom, said: &quot;This transaction represents an important building block as we create one of the world's leading infrastructure technology companies. With its sizeable installed base of customers, CA is uniquely positioned across the growing and fragmented infrastructure software market, and its mainframe and enterprise software franchises will add to our portfolio of mission critical technology businesses. We intend to continue to strengthen these franchises to meet the growing demand for infrastructure software solutions.&quot; While those words look nice on paper, the acquisition is old news. Customers of both companies are now asking, &quot;What's in it for me?&quot; Broadcom believes in the future of the enterprise data center market - a belief that drove the pursuit of this merger. The companies' networking and storage businesses have grown rapidly due to industry demand and transformation initiatives. Companies must securely and reliably scale data centers to drive digital transformation initiatives and compete in today's marketplace. This merger gives existing customers of both corporations the opportunity to benefit from the natural synergy of Broadcom's industry-leading IT Infrastructure offerings and CA's industry-leading suite of mainframe solutions. I recently listened to Dez Blanchfield's podcast, Conversations with Dez. In this episode, he sat down with Greg Lotko, Senior Vice President and General Manager, Broadcom Mainframe Division, to discuss the future of the platform. https://soundcloud.com/dez_blanchfield/conversations-with-dez-talking-with-greg-lotko-svp-gm-mainframe-division-broadcom Let's dive into a few of the key takeaways. What Does the Acquisition Mean for the Market, and for Customers? 
CA Technologies is an industry leader in mainframe software solutions across application development, security, and ITOM, and that doesn't seem to be changing anytime soon. According to Greg Lotko, General Manager and SVP of Broadcom's Mainframe division, &quot;The</description>
      </item>
      <item>
         <title>The 2018 DevOps Enterprise Summit in London</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/the-2018-devops-enterprise-summit-in-london</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/the-2018-devops-enterprise-summit-in-london</guid>
         <pubDate>July 26, 2018</pubDate>
         <description>IT Professionals Rocked Their Apps with CA The first DevOps Enterprise Summit (DOES) of 2018 was held on the 25th and 26th of June at the InterContinental London – The O2 hotel. This event provided leaders in DevOps at complex organizations around the world with the opportunity to come together to discuss the latest trends in DevOps and enterprise IT management. In the spirit of making your apps rock with continuous delivery and continuous testing, the CA booth featured information about automation and testing solutions from CA, and our representatives gave away Les Paul guitars to six lucky winners who were randomly chosen from the audiences at our Lightning Talks. The Key to Continuous Delivery More than 750 guests attended the summit, and the agenda included breakout sessions, plenaries and demos. Keynote speeches at the summit were delivered by industry experts and aimed at leaders of both development and operations. Duncan Bradford, EMEA CTO at CA, gave a keynote titled &quot;Your Roadmap to Continuous Delivery and Continuous Testing&quot; to a full house. In it, he discussed the software delivery challenges that enterprises face, what solutions exist to help overcome these challenges, and the future of software architecture, continuous delivery and the Modern Software Factory. A stream of the speech is available online. Other opportunities to stay on top of industry trends and talk with experts included Lightning Talks – short speeches on a range of topics, such as &quot;100,000 User Load Test in &lt; 10 Minutes&quot; and &quot;The Stairway to Continuous Delivery Heaven,&quot; which were held at the CA booth. These talks were a hit with audiences and preceded each guitar giveaway. Tune Up Your Releases Continuous delivery is at the heart of DevOps, and guests at the summit were interested in discussing the role of continuous testing to avoid the QA bottlenecks that</description>
      </item>
      <item>
         <title>Too much work, too little time</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/too-much-work-too-little-time-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/too-much-work-too-little-time-clarity-ppm</guid>
         <pubDate>April 25, 2019</pubDate>
         <description>Seventy percent of transformation efforts fail. It's a statistic that's been thrown around for 25 years, most recently cited by McKinsey. PMI reports that 9.9% of every dollar spent on projects is wasted - and that's an improvement on previous studies. There are many reasons for these failures, but organizational fatigue is a big one and it's getting bigger.

The problem is that the speed of business has increased, but the approach hasn't. As a result, planning is occurring more frequently, with more projects being approved, adjusted and cancelled, but it's still based on individual proposals from different business areas. At the same time, many organizations still approve way more projects than they are capable of delivering. This causes frustration, lost productivity and an overall sense of organizational fatigue - and the more frequent planning becomes, the worse it gets.

Organizations must be more strategic, even as they adjust their tactical work more frequently. Planning must change to align approved work with long-term roadmaps that guide the strategic direction of a product, service, or the entire business. Adjustments in the short term must still contribute to progress on that strategy, and work that doesn't contribute should never be approved in the first place.

Organizations talk a lot about creating an environment where their employees can &quot;work smarter, not harder,&quot; but they still operate with legacy planning techniques that are anything but smart. Change those planning fundamentals and you'll go a long way to alleviating organizational fatigue.
</description>
      </item>
      <item>
         <title>The Power of “Defining Done”: A Simple Concept to Ignite Company-wide Change</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/the-power-of-defining-done-a-simple-concept-to-ignite-company-wide-change-rally-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/the-power-of-defining-done-a-simple-concept-to-ignite-company-wide-change-rally-software</guid>
         <pubDate>July 9, 2019</pubDate>
         <description>I see many people struggling to find the benefits of agility. There is so much noise in the market today, so many people telling you how to make radical changes to your organization, that it’s hard to know what to believe and what direction to move in. This blog is the first of a multi-blog series, through which I will share small changes you can make to create ripple effects of goodness in your organization. What is Definition of Done? Let’s start with the ‘Definition of Done’ (DoD). The Definition of Done is not Acceptance Criteria. Acceptance criteria are specific to a story and tell the person working on the story and those who test it how far they need to take it. Acceptance criteria should be specific to that one piece of work and should not be overloaded with things everyone has to do, like security scans and unit tests. That is what standardized Definitions of Done are all about. As a leader in a technology organization, do you have a clear understanding of the state of done in your organization? What does “yes, that’s done” mean to you? Does it mean the same to everyone whose name is aligned to that work? It should. But don’t go overboard. Start with defining ‘Done’ for Stories, Iterations, and Releases. Most companies have legacy release criteria already defined. Pull that out, dust it off, and see how close you get with each release. If it’s pretty good, then you have a starting DoD for releases in your organization. If it needs a lot of work, set it aside and start with stories. I’ll create a future post regarding DoD for Iterations and Releases. Stories: Are we done yet? For any story in your organization, it’s important to identify what</description>
      </item>
      <item>
         <title>Oracle Retail: As Easy as Riding a Bike</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/oracle-retail-as-easy-as-riding-a-bike</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/oracle-retail-as-easy-as-riding-a-bike</guid>
         <pubDate>October 21, 2018</pubDate>
         <description>How to Get the Most Out of Your Oracle Retail Investment As a tool for transport, bicycles are masterpieces of efficiency: every part of a bike is visible and serves a purpose, and the controls are simple. Bicycles have revolutionized transportation, and the right tool can revolutionize your enterprise software systems as well. Oracle Retail can be a vital part of your business, but this solution can also be challenging to integrate with legacy applications and the rest of your enterprise software. Still, when you have visibility into and control over Oracle Retail, the efficiency benefits it can bring to supply chain management, batch scheduling and customer service make implementation worthwhile. And for organizations that already use Oracle Retail, there are ways to get more out of it. In a recent webinar, CA Product Marketing Manager Tony Beeston discussed the challenges of implementing this core retail solution and gave tips for how to optimize your critical business processes quickly with the help of automation. The implementation process is a journey, and building solid foundations is critical to its success. Having realistic expectations for the timeline of each phase of the project can allow you to get the early steps right, rather than put in a temporary solution that works in the short term, but never gets addressed later. If you have already implemented Oracle Retail and experience day-to-day operational challenges, such as accidental data loss or downtime caused by human error, automation can help you overcome them. Automating defined workflows provides visibility, control and efficiency over your processes by automatically integrating data with monitoring tools, showing you where bottlenecks exist and minimizing errors. Automation can also improve your point of sale and drastically speed up processes overall, shortening implementation cycles by 75%, which helps guide your business to a place of</description>
      </item>
      <item>
         <title>PODCAST: Managing Modern Wireless Architectures, A Discussion with Broadcom Product Management - AI-Driven IT Operations Management (ITOM) Blog</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-managing-modern-wireless-architectures-a-discussion-with-broadcom-product-management-ai-driven-it-operations-management-itom-blog</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/podcast-managing-modern-wireless-architectures-a-discussion-with-broadcom-product-management-ai-driven-it-operations-management-itom-blog</guid>
         <pubDate>July 26, 2019</pubDate>
         <description>Amit Mohanty is a Product Manager within the Broadcom Enterprise Software Division, focused on AIOps solutions. He is passionate about creating solutions that improve the Network Operations (NetOps) experience for customers. Amit holds a master's degree in management and a bachelor's degree in engineering, and has over 15 years of experience in the telco and hi-tech industry. He is based out of Hyderabad, India.</description>
      </item>
      <item>
         <title>Accelerating the speed of innovation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/accelerating-the-speed-of-innovation-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/accelerating-the-speed-of-innovation-clarity-ppm</guid>
         <pubDate>February 11, 2019</pubDate>
         <description>In a recent study, McKinsey found that 84 percent of executives thought innovation was an important part of their growth strategy, but only 6 percent were satisfied with their innovation performance.

To be successful, organizations must deliver solutions to their customers that are innovative, but they must also consistently deliver them in less time than their competitors. So how do you do that? How do you constantly innovate while reducing the time from idea to solution?

The answer is in how you plan. 

You must ensure your product development is always pursuing a well-defined strategic growth path. That's where the idea of product roadmaps comes in. A strategic roadmap defines the broad direction your products will take as they evolve, providing guidance to short-term project efforts.

Project teams can then focus on delighting customers by meeting their current demands, advancing the product along the roadmap, and delivering solutions that leverage current technological capabilities in ways that have never been achieved before. When product and project teams have to reinvent what innovation looks like with every release, the chances of failure increase. When they can use roadmaps as their guide, the right solutions become much easier to define.

Of course, those roadmaps have to be defined in the first place. Organizations must invest in innovative product managers who not only understand their markets and customers, but who are also prepared to challenge accepted norms, asking &quot;why not&quot; whenever a new opportunity arises.

Innovative roadmaps, executed by innovative organizations, will result in consistently innovative products and services that delight customers and drive sustainable value.
</description>
      </item>
      <item>
         <title>Hidden figures: How Mainframe and IT Heroes are Winning the Digital Race</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/hidden-figures-how-mainframe-and-it-heroes-are-winning-the-digital-race</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/hidden-figures-how-mainframe-and-it-heroes-are-winning-the-digital-race</guid>
         <pubDate>March 2, 2017</pubDate>
         <description>Why organizations tapping the business value of hidden mainframes are winning in the digital transformation era. Based on a true story, the film Hidden Figures follows three brilliant African-American women working at NASA who served as the brains behind one of the greatest space operations in history, one that galvanized the world in the early 1960s – the launch of astronaut John Glenn into orbit. Like those women working behind the scenes to put John Glenn into space, the mainframe is the hidden engine – and mainframe IT folks the hidden figures – that power the world's mission-essential business transactions. Consequently, the mainframe is quickly becoming a source of revenue growth and innovation for forward-thinking companies. The first mainframes were instrumental in the launch of space flight and programs like Social Security. Fast-forward to the present: the z13 can process 2.5 billion transactions per day (the equivalent of 100 Cyber Mondays every day, according to IBM). Today's mainframes are mission-essential to businesses around the globe, including 44 of the top 50 banks and 90 percent of airlines, serving a very different economy. The mainframe is now the chosen platform for Blockchain and machine learning, transforming from a revenue-supporting machine into a revenue-generating engine and increasingly playing a central role in organizations' digital transformation journeys. Hidden irony This was the conclusion of a recent CA- and IBM-sponsored study with IDC examining mainframe trends and value to organizations. The hidden irony is that, like the women's unrecognized mathematical skills in the movie, the study found that CIOs and IT leaders were unaware that the mainframe is the secret engine within the data center or even the cloud. For example, a healthcare organization noted that it is moving to the next step with IoT and Big Data: &quot;We have partnered with</description>
      </item>
      <item>
         <title>Cyber Security and Mainframe Security Essentials</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/cyber-security-and-mainframe-security-essentials</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/cyber-security-and-mainframe-security-essentials</guid>
         <pubDate>April 18, 2017</pubDate>
         <description>By Stuart McIrvine, VP of Product Management, CA Technologies. There's no shortage of innovative cyber security startups promising smarter, better, tougher ways to keep the good stuff in and the bad guys out. But when I read about these pioneering startups, I sometimes wonder if business leaders and information security teams shouldn't be focusing closer to home. Fortress mainframe? It’s true that mainframe architectures are inherently more secure than distributed systems. It’s one reason the mainframe remains as important as ever for mission-critical workloads. But as mainframes are opened up to web and app-based endpoints and services, the risk of a mainframe data breach is something that every business should consider. Philip Young, co-founder of ZedSec390, identifies three big reasons why businesses mustn't neglect mainframe security. First, it’s a mission-critical asset, where up to 80% of enterprise data – including customer and transactional data – is stored. Second, the cost of a mainframe hack is potentially huge, in terms of brand damage, downtime, and regulatory fines. The new EU GDPR, for example, carries fines of €20m or 4% of global revenue, whichever is greater, for non-compliance in areas like data portability and data breaches. Third, Philip emphasizes the importance of including the mainframe in your overall enterprise security plan, for example in areas like penetration testing and vulnerability assessments. Focus on data It's very effective to take a data-centric approach to mainframe security. After all, you can't protect something if you don't know it's there – especially since one study estimates that 54% of mainframe data is effectively invisible. 
A data-centric security model follows seven steps for compliance, access and alerts, based on your mainframe data and its associated risks: assess compliance requirements and prioritize what needs to be done; identify where sensitive data is stored, how it's classified and who</description>
      </item>
      <item>
         <title>What broke the Matrix will revive the Mainframe experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/what-broke-the-matrix-will-revive-the-mainframe-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/what-broke-the-matrix-will-revive-the-mainframe-experience</guid>
         <pubDate>October 19, 2017</pubDate>
         <description>A Playbook for Modernizing the Mainframe, Part 2 In the opening blog Why The Time Is Now To Modernize Development On The Mainframe, Jean Louis Vignaud kicks off our blog series with our goal of bringing DevOps and &quot;the cloud experience&quot; to mainframe as a means for modernizing development on this mission-essential platform. Three primary stakeholders are critical to realizing a successful modernization effort, and in Part 2 of our playbook I will outline the keys to unlocking the full value of one stakeholder, the modern developer. Many of the customers I've spoken with are undergoing a disruptive generational shift in their workforce. As mainframe experts retire and cede their responsibility over mission-essential applications, businesses are left with the challenge of onboarding a new generation of modern developers. Modern developers are a diverse group, with responsibilities ranging from specialized back-end applications to web/mobile applications that touch the full technology stack. However, despite these differences, all of them share a similar lack of interest in becoming experts on mainframe, and are even less inclined to adopt historical practices established by their predecessors. The problem is choice In the iconic Matrix movie series, the protagonist Neo discovers that the human concept of &quot;choice&quot; is what keeps the villain from creating the perfect trap. Makes sense. Enabling choice can be a difficult barrier to overcome because &quot;choice&quot; intrinsically entails greater complexity. Yet, this barrier is precisely what businesses must overcome in order to successfully hire and retain the best development talent out there. Modern developers want to hit the ground running, applying their highly coveted skillset without having to take on new competencies that seem irrelevant to the future of their career. The desired end-result seems clear. 
Businesses that succeed are the ones that allow their developers to use their preferred, best-in-class tools,</description>
      </item>
      <item>
         <title>At a Glance: BCM81724 400G reverse gearbox ideally suited for hyperscale data center and cloud infrastructure</title>
         <link>https://www.broadcom.com/blog/at-a-glance-bcm81724-400g-reverse-gearbox-ideally-suited-for-hyperscale-data-center-and-cloud-infrastructure</link>
         <guid>https://www.broadcom.com/blog/at-a-glance-bcm81724-400g-reverse-gearbox-ideally-suited-for-hyperscale-data-center-and-cloud-infrastructure</guid>
         <pubDate>February 6, 2019</pubDate>
         <description>Broadcom is first to deliver an 8×56-Gb/s PAM-4 to 16×25-Gb/s NRZ forward and reverse gearbox, designed to enable next-generation high-performance switches with PAM-4 I/Os to connect to the large existing ecosystem of switches and plug-in modules with NRZ data formatting. The BCM81724 can also be configured as an 8×56-Gb/s PAM-4 retimer to extend high-speed copper and optical links in modern networks. Features: single-chip 8×56-Gb/s PAM-4 to 16×25-Gb/s NRZ reverse gearbox PHY with 8×56-Gb/s PAM-4 pass-through mode; supports both the PAM-4 and NRZ data formats; supports forward error correction (FEC); supports SGMII pass-through; on-chip clock synthesis from a low-cost 156.25-MHz reference clock by high-frequency, low-jitter phase-locked loops (PLLs). Benefits: interoperates with Broadcom ASIC and merchant switch silicon. Applications: high-density 10G, 25G, 40G, 50G, 100G, 200G, and 400G front-panel line-card applications; ASIC-to-module interfaces for 16×25-Gb/s and 8×56-Gb/s front-panel applications. Broadcom drives faster migration times to terabit switches and routers. “With the introduction of switches such as the Tomahawk 3 with 56G I/Os that are critical to meeting the rapidly increasing bandwidth needs in today’s cloud computing and hyper-scale data center environments, the BCM81724 is essential to interface these next generation high density switches to the existing 100G optical module ecosystem,” said Lorenzo Longo, senior vice president and general manager of the Physical Layer Products Division at Broadcom. 
“Built with proven PAM-4 SerDes that are foundational to Broadcom’s state-of-the-art switch processor chips, both merchant silicon and ASICs, our 16nm PAM-4 Reverse Gearbox provides the most robust and essential bridge for the end-to-end solutions driving faster time to market for our customers and expanding bandwidth capacity of next generation networks.” “50Gbps PAM-4 is quickly becoming the standard interface for hyperscale data center and cloud systems. As new switches such as Tomahawk 3</description>
      </item>
      <item>
         <title>Empowering Automation Center of Excellence Initiatives</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/empowering-automation-center-of-excellence-initiatives-ca-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/empowering-automation-center-of-excellence-initiatives-ca-automation</guid>
         <pubDate>July 18, 2019</pubDate>
         <description>I Googled “Digital Transformation” and got more than 40,000,000 results … pretty packed. In case it wasn’t already clear, most companies are still trying to figure out how to best use emerging technologies. IDC envisions that 55% of organizations will be digitally determined by 2020, pushing transformation initiatives and spending up to nearly $6T. In fact, as disruption threatens every market, the so-called digital transformation appears to be unavoidable for any organization. So I am sure you know the digital transformation story well: it is all about a greater level of business pressure – pressure that drives technological and organizational change. In fact, why are you transforming? Simply because you need to innovate fast and stay ahead of the competition, constantly delight your customers and avoid churn. All that while dealing with speed and volumes that you never reached before. So, there are a few key areas to focus on when tackling digital transformation initiatives to increase agility in an organization. Key areas to focus on when tackling digital transformation The first area is managing and controlling business processes end-to-end. This wasn’t a challenge when all functions were centralized and integrated, but introducing multi-cloud and SaaS into your application infrastructure brings a new set of challenges, as you still need to stay in control of the whole business process execution. Another area is delivering an innovative digital experience to the market. With constant pressure to transform at scale, DevOps organizations need to create a robust framework of tools and processes to enable continuous delivery. However, coordinating tools and teams is often done manually or through ad-hoc scripting, which causes errors and delays that put the business at risk. The last area can be seen as a consequence of the other two.</description>
      </item>
      <item>
         <title>Time For A New Generation?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ca-mainframe/time-for-new-generation-z15</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ca-mainframe/time-for-new-generation-z15</guid>
         <pubDate>September 12, 2019</pubDate>
         <description>I'm a car guy. In fact, my team tells me I use way too many car analogies. Maybe, but I find them useful as a proxy when talking about IT platforms and experiences. Some of you may have heard me talk about this before, but it merits repeating. Let’s take one of the most successful, longest-running automotive platforms – the Corvette. Admittedly my favorite, so much so that I even traveled to the factory to build the motor for mine in 2012. Since 1953, each successive generation of the platform has provided new capabilities and experiences for the enthusiast driver. Earlier this year, General Motors announced the C8 as the latest generation of the platform. It’s a game changer, with a mid-engine design providing an unprecedented and exhilarating driving experience. I’ve read a lot about it and hope to drive one soon! Organizations with mainframes, too, are finding themselves in a position to upgrade. Today, IBM announced the availability of the IBM z15™, extending IBM Z as a secured and open hybrid multicloud platform, with new innovations across security, data privacy, and resiliency. One new feature that particularly caught my eye is IBM System Recovery Boost. Keeping with the car analogy, this would be like adding a nitrous oxide boost kit to my Corvette: after completing a pit stop, I can temporarily boost the output for a burst of speed to catch up. All z15s come with this feature at no additional HW or IBM SW charge. This is a great differentiator for the platform, and we fully endorse this approach to SW charges. According to Ross Mauri, general manager of IBM Z: “As clients address their mission critical workloads – the 80% of their workloads that are not yet in a hybrid multicloud – security, privacy, cloud-native development and resiliency</description>
      </item>
      <item>
         <title>Achieving AI-driven IT Operations</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/achieving-ai-driven-it-operations</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/achieving-ai-driven-it-operations</guid>
         <pubDate>August 8, 2018</pubDate>
         <description>A recap of the industry's first AIOps summit By: Laura Pianin, Communications Intern More than 2,000 people registered for the industry's first AIOps virtual summit hosted by CA Technologies – an event that brought real-world expertise and hands-on guidance from industry leaders on AI, machine learning (ML), and analytics. Cutting through the hype of the AI craze Tom Davenport, the President's Distinguished Professor of IT and Management at Babson College, began the summit by discussing how AI can bring augmentation, with humans and computers combining their strengths to garner outcomes neither could accomplish alone. Augmentation is beginning to happen now as AI is being piloted and deployed most heavily in IT. &quot;In the past, in manufacturing, we talked about the factory. Now, in many cases, IT Operations is the factory. You just can't survive without it operating effectively and smoothly. So, I think there's not much doubt that AIOps is the way this is going to move,&quot; Davenport explained in his keynote. The type of AI and automation projects that usually garner the most attention are the &quot;moon shots,&quot; big, ambitious implementations of AI/ML. However, Davenport sees most of the work in AIOps as &quot;invisible&quot; – quietly but meaningfully improving operations step by step. AI: automating automation The &quot;moon shot&quot; projects may not be coming to fruition just yet, but AIOps efforts are already beginning to occur. Ashok Reddy, CA's Group General Manager of DevOps, likens the landscape to a self-driving car. Although self-driving cars aren't on the roads today, many cars already have automated features – features that already increase productivity. The same goes for IT operations. Just as cars have started to implement blind spot warnings and automated parking, IT operations can automate anomaly detection and root cause analysis. 
This step by step process ultimately builds to the self-driving car or continuous AIOps with fully</description>
      </item>
      <item>
         <title>5 Ways AIOps Enables a Successful Black Friday</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/5-ways-aiops-enables-a-successful-black-friday</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/5-ways-aiops-enables-a-successful-black-friday</guid>
         <pubDate>November 19, 2018</pubDate>
         <description>Key use cases to help you keep (lots of) customers happy during your biggest events It's been said that as much as 50% of Black Friday sales will happen online. With the importance of Black Friday and Cyber Monday to your bottom line, it's paramount that IT Ops teams keep things running smoothly and efficiently. You don't want to let what should be your best day become your worst day through slow performance or system outages. Abandoned carts mean lost business and can negatively impact customer loyalty. The big question is: how do you make sure that you'll be giving thanks and not working overtime during this holiday season? AIOps Can Help AIOps is the use of advanced machine learning algorithms and AI techniques to analyze and act on big data from various IT and business operations tools. It helps you deliver great-performing services faster and increase the efficiency of your operations, all while helping you deliver a superior user experience. Said more plainly, it helps you identify and correct problems, often automatically. Let's take a closer look at how AIOps can help make your peak season a good one. Find the causes of problems fast: While it's good to be alerted when a problem occurs, you'll want to get to the root cause as quickly as possible. AIOps helps you understand the reason behind a poor user experience and gives you the context of the problem so you can remedy it appropriately, minimizing user impact. Predict problems earlier: AIOps can give you smarter alerting that detects anomalies by sifting through data across various monitoring tools and applying algorithms that predict emerging problems when events are out of the norm, helping you avoid problems before they impact your customers. Identify the alerts that matter: Alert storms are</description>
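The smarter alerting described above rests on anomaly detection over metric streams. As a minimal sketch (not Broadcom's implementation; the function name, threshold, and data are all illustrative), a baseline of normal response times can flag a deviation with a simple z-score:

```python
import statistics

# Illustrative z-score anomaly detector; real AIOps platforms use far
# richer models (seasonality, multi-metric correlation, learned baselines).
def is_anomaly(baseline, value, threshold=3.0):
    """Flag `value` when it deviates from the baseline mean by more
    than `threshold` standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Response times (ms) observed during normal operation...
baseline = [102, 98, 101, 99, 100, 103, 97, 100]
normal = is_anomaly(baseline, 100)   # within the baseline's spread
spike = is_anomaly(baseline, 180)    # a Black Friday-sized deviation
```

An alert fires only on `spike`; this is how anomaly-based alerting cuts the noise of static thresholds.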
      </item>
      <item>
         <title>Modern SmartNICs are critical for performance and efficiency in today's data center infrastructure</title>
         <link>https://www.broadcom.com/blog/modern-smartnics-are-critical-for-performance-and-efficiency-in-today-s-data-center-infrastructure</link>
         <guid>https://www.broadcom.com/blog/modern-smartnics-are-critical-for-performance-and-efficiency-in-today-s-data-center-infrastructure</guid>
         <pubDate>May 15, 2019</pubDate>
         <description>SmartNICs have become a critical component in today’s data center infrastructure. Their performance and versatility have helped IT managers improve performance, increase revenue and enable new applications, while at the same time reducing TCO and improving power efficiency. Host offload Modern SmartNICs are designed to run high-performance networking, storage and management workloads. By offloading those workloads from the host CPU to the SmartNIC, infrastructure managers free up valuable CPU cores, which can then be made available to new tenants for revenue, in the case of cloud providers, or in general to augment the software capabilities of the host. Examples of applications that are particularly well suited for SmartNIC offload include virtual switching (e.g. OVS), software-defined storage agents, and storage networking (e.g. NVMeoF™ and NVMe™/TCP). Those applications run much more efficiently on purpose-designed SmartNIC ASICs with hardware accelerators than on general-purpose host CPU cores. Offloading them to the SmartNIC therefore results in a significant increase in performance and a reduction in power consumption. The figure below illustrates the SmartNIC offload concept. Bare-metal services By providing a clear, secure, provider-managed demarcation between the server and the network, SmartNICs enable the deployment at scale of bare-metal services. In bare-metal services, cloud providers lease an entire physical server to their clients. Bare-metal servers, as they are known, do not require a hypervisor and, as a result, deliver better performance and more flexibility for the tenant than a virtual server. Because cloud providers do not control the software that runs on the server, they need an alternative, secure way of provisioning and managing the server. They also need to ensure that the server does not compromise the integrity of their network. SmartNICs are the ideal tool to configure, manage and secure bare-metal servers. 
The figure below illustrates how</description>
      </item>
      <item>
         <title>A Recap Of IDUG Brazil</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/a-recap-of-idug-brazil</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/a-recap-of-idug-brazil</guid>
         <pubDate>August 26, 2019</pubDate>
         <description>Last Tuesday, August 20th, São Paulo held one of the most important conferences for Db2 professionals, the IDUG – International Db2 User Group Conference. I, a Brazilian who has been living abroad for the past three years, had the opportunity to represent Broadcom by speaking in two of the day’s sessions. I was also excited to have the opportunity to meet up with friends, customers and co-workers from the past. The event was held at the Sheraton WTC Hotel, located in a particularly beautiful part of São Paulo. Below is a picture taken from my hotel room at night: During registration, Broadcom’s booth was quite a success! Many came to talk to us (Antonio Couto and myself). They were also looking for our Db2 12 for z/OS Catalog Poster and were interested in the Recovery Handbook and Reference Guide. The pictures below speak for themselves; all of the posters were gone very quickly. Broadcom had four sessions throughout the day. First, Denis Pereira from Broadcom in Brazil talked about Zowe. It was truly amazing to see how interested people are in learning more so they can use it in their shops. The audience actively participated, making comments, asking questions and generally interacting with Denis. It is amazing how simple it is, from the user’s perspective, to extract data from a Db2 for z/OS table using the terminal with only a single command. During lunch, I got the chance to catch up with people I’ve known for a long time. Some I had mentored in the past, and today they have become amazing Db2 professionals. Some were customers I’d worked with when I was based in Brazil. They came to hug me, tell jokes, recall what we went through in the past together and catch up on what each of us is doing now.</description>
      </item>
      <item>
         <title>Privileged Users And Insider Threat — Mitigating Risk To Your Mainframe</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/privileged-users-and-insider-threat-mitigating-risk-to-your-mainframe</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/privileged-users-and-insider-threat-mitigating-risk-to-your-mainframe</guid>
         <pubDate>September 6, 2019</pubDate>
         <description>Regardless of which survey results you examine, there is no doubt that the biggest security risk organizations face today is the insider threat. And, with the mission-critical role that today’s mainframes play, we must pay close attention to the insider threat and the management of privileged users. Just because the insider threat is significant does not mean we should presume that a large majority of insiders are malicious. Insider threats come from many sources. Certainly, malicious insiders pose a threat, but often the threat originates from good employees making mistakes, or a valid system account being compromised by an external attacker. Faced with these significant threats, we must focus our security management practices on effectively mitigating the risks in this area. But there are several challenges that must be properly addressed. Often, there are too many privileged user accounts (allowing extensive access to system resources) with 24×7 access privileges. The necessary audit information to understand all account behavior may not be available — and when it is available, there is often too much data, making it difficult to determine risk levels. Organizations may try to limit the number of privileged accounts on the system, but this often forces sharing of these accounts amongst multiple users. This means sharing passwords, thereby increasing risk. Managing the threat from insiders requires adoption of best practices to address the risks and challenges that most organizations deal with on a daily basis. There are four main elements of an effective best-practices approach: Assess and Secure Govern and Control Record and Review Operationalize Assess and Secure It is important to assess your existing security posture and, based on that assessment, implement the necessary security controls to mitigate the key risks. This involves identifying your privileged users and determining which of these users truly need this</description>
      </item>
      <item>
         <title>Alerts - When Actions Speak Louder than Words (on Consoles)</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/alerts-when-actions-speak-louder-than-words-on-consoles</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/alerts-when-actions-speak-louder-than-words-on-consoles</guid>
         <pubDate>October 11, 2017</pubDate>
         <description>Are you spending lots of time responding to false alerts and noise? If you are, then it’s no wonder a career in IT Ops is often considered a hard slog. Like sitting through a long and boring PowerPoint presentation: you know there’s some nugget of useful information, but finding it buried in a stream of wordy slides is next to impossible. Beware the Operational Dead Zone It’s a quirk of nature that we humanoids tend to zone out when things get dull and repetitive. Think about the last time you took a long drive. Like me, have you ever jolted back into the moment and wondered what the heck you’d been doing for the last twenty minutes? Driving, of course, but how much can you recall? It’s the same in IT Ops, where we can lose any sense of urgency and zone out. So how often do we game the system to make life tolerable? Like perhaps hacking up some automation that rejigs an alert threshold. That might sound like a fair cheat, but it’s not exactly foolproof, right? Avoiding Unnecessary Sleep Deprivation with Application Performance Management Having to address repeat problems at 3:00am sucks. No problem, I hear you say: just caffeinate quickly and kick off a handy script that kills a few processes and reboots a suspect server. Then back to bed and forget about it until your next on-call rotation. But even if we fully document our efforts and update the runbook, do we really have a permanent solution? Of course not – we’ve just contributed to the problem by applying a band-aid fix, with applications limping from one problem to the next. But faced with the stresses of modern IT operations it’s perhaps understandable that folks often cut corners. There just never seems to be enough time</description>
      </item>
      <item>
         <title>Broadcom drives infrastructure development for 10G fiber broadband</title>
         <link>https://www.broadcom.com/blog/broadcom-drives-infrastructure-development-for-10g-fiber-broadband</link>
         <guid>https://www.broadcom.com/blog/broadcom-drives-infrastructure-development-for-10g-fiber-broadband</guid>
         <pubDate>October 15, 2019</pubDate>
         <description>Nearly all global telecom operators are actively deploying fiber-based broadband services today. Yet as they look to match competitive pressure for higher bandwidth, or to increase the coverage of their addressable subscriber base, they face daunting challenges to restrain capital expenditures, minimize operating expenses and construct an open, interoperable network. Broadcom recently announced two new product families that will prove instrumental in overcoming these challenges. A typical network Rarely do telecom operators have the luxury to conduct a true greenfield deployment, where the entire network, from the access infrastructure and optical distribution network to the consumer premises equipment, is entirely new. Instead, the common case is a massive set of commercial and technical problems in grafting new services onto legacy networks. The illustration below highlights the most common of these. Most telecom operators are evolving from a predominantly copper-based network, built on twisted pair installed decades ago, to one with a much higher density of fiber. The copper connections can include Central Office DSLAMs, cabinets installed in neighborhoods and multi-dwelling units situated in the basements of buildings. PON fiber may be the network backhaul for each of these plus the customer premises connection for FTTH. Business services, and in some cases mobile transport, must be offered over the same heterogeneous network. And all of this is delivered by a long history of legacy equipment, each piece with its own unique hardware and software architecture, feature roadmap, support capabilities and lifespan. Figure 1: Access Network Applications The fiber infrastructure Broadband today starts with fiber, and the technology of choice is 10Gb/s PON. Building on the success of GPON, most operators plan to offer the ITU’s 10Gb/s successor, XGS-PON, with wide deployments in the near future. 
To do this cost effectively requires PON infrastructure capable of high density, low power, and comprehensive legacy</description>
      </item>
      <item>
         <title>Network switch system design considerations in the data center</title>
         <link>https://www.broadcom.com/blog/network-switch-system-design-considerations-in-the-data-center</link>
         <guid>https://www.broadcom.com/blog/network-switch-system-design-considerations-in-the-data-center</guid>
         <pubDate>October 17, 2019</pubDate>
         <description>Evolving workloads within the data center are demanding increases in bisection bandwidth. New endpoints, such as dedicated AI accelerators and GPUs, are being added to CPU fleets to support these new types of workloads, with correspondingly higher demand on the data center network. In recent years, endpoint speeds have evolved from 10 Gb/s through 25 and 50 Gb/s and recently as high as 100 Gb/s. The changing nature of the new workloads and the desire to enable distributed workload placement require switches to provide lower tail latency at higher network utilization, thus reducing the total cost of ownership of the network. Network switching provides a main point of bandwidth confluence within the data center. The concentration of traffic that occurs at the switches results in some unique engineering challenges in dealing with high density, signal integrity, power delivery and cooling. Although the power per unit bandwidth has been decreasing, offering increased networking efficiency, the total power per single switch element is increasing from one generation of switch ASICs to the next. As a result, as the total switch bandwidth scales, the thermal and power density also increase, which mandates improvements in heatsinks and associated thermal solutions, as well as the power delivery network. At a chip level, there are many challenges to overcome when designing high-bandwidth devices. The raw transistor speedup per process node is on the decline (Moore’s law of diminishing returns). The metal interconnect (wire) between transistors also does not scale down in size with the process node. This places restrictions on the physical design of blocks so that they are not wire-dominated. The maximum chip area remains constrained by the maximum reticle size and, for 2.5D devices, the size of the interposer. The chip and interposer sizes both affect the yield and therefore the cost of the chip.</description>
      </item>
      <item>
         <title>Why Hack? Get Your Org on Board with Hackathons</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/why-hack-get-your-org-on-board-with-hackathons</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/why-hack-get-your-org-on-board-with-hackathons</guid>
         <pubDate>September 10, 2019</pubDate>
         <description>After participating in a dozen hackathons at Rally, I have seen firsthand the tremendous value this dedicated innovation time brings to our organization. Since hackathons involve a time commitment, it can oftentimes be difficult to obtain team and leadership buy-in to conduct these events if the value and benefits are not widely shared and understood across the business. In this blog, I’ll share the overview and benefits of hackathons to convince your teams to get on board and help ignite this movement in your organization. What is a Hackathon? Hackathons at Rally are occasional 5-7 day investments during which our engineering organization cancels regular deployments, meetings and planned work. During this time, the entire organization focuses on their hackathon projects. Projects can vary from learning a new coding language, to implementing a product feature, to fixing an issue that has been festering in the code—just to name a few. The only rules guiding Rally hackathons are: Follow your passion Demo your work to the organization Demos can either be conducted live or submitted as pre-recorded videos that we watch together. After the demos, the team votes for their favorite hackathon project via a Google form, and we name a “winner”. Our most recent hackathon winners worked on creating a dark-theme user interface within Rally that was a hit with our team. Some other projects included a feature to “pin” commonly visited Rally work items, and a feature to enforce rules before passing a work item from one state to another. These are just a few examples of innovative ideas that we may explore as possible additions to the product in the future. Why Hack? Exploring possible product feature ideas that are not part of the current product roadmap is only one benefit of hackathons. So what is the real value of making</description>
      </item>
      <item>
         <title>Cumulative Flow: The One Chart You Need to Know</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/cumulative-flow-the-one-chart-you-need-to-know</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/cumulative-flow-the-one-chart-you-need-to-know</guid>
         <pubDate>September 4, 2019</pubDate>
         <description>When I’m out talking to customers, I get a lot of questions about the work taking place in their organization: How much work can we do? How soon can we do it? How reliably will we deliver? It turns out, the answer to most of these questions comes from a close study of process flow. And it so happens that a cumulative flow diagram (CFD) is a great measurement tool for flow. So what is a cumulative flow diagram? A cumulative flow diagram is just a chart that shows the number of work items moving through your process over time. Let’s break down a CFD piece by piece. On the y-axis, you have a cumulative work item count. Across the x-axis, you have some dimension of time – it could be days, weeks, or months. In the example above, time is measured in weeks. Finally, there are what we call bands. Each band represents a unique process state and displays the total number of items in that state on each date. In any CFD, there are two important bands to pay attention to – departures and arrivals. In the chart above, the light blue band indicates items that have moved out of the process (i.e. deployed). This band will trend up over time, which gives the cumulative flow diagram its unique mountain shape. The other important band to keep in mind is the top line, which indicates the number of items arriving in your process (i.e. defined). Now that we have an understanding of how to read a CFD, the question that often comes up is: what does it all mean? Well, a CFD is all about visualizing flow metrics, and there are three really important flow metrics to know: Work in process (WIP). If you draw a vertical line</description>
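The band construction described above can be sketched in code. In this hypothetical example (the item names, states, and dates are invented for illustration), each band counts the items that have reached a given state on or before a date, and WIP falls out as the arrivals band minus the departures band:

```python
from datetime import date

# Ordered process states; a CFD band counts items at-or-past each state.
STATES = ["Defined", "In Progress", "Completed", "Deployed"]

# Hypothetical history: (item_id, state, date the item entered that state).
history = [
    ("S1", "Defined", date(2019, 9, 2)),
    ("S1", "In Progress", date(2019, 9, 3)),
    ("S1", "Deployed", date(2019, 9, 5)),
    ("S2", "Defined", date(2019, 9, 3)),
    ("S2", "In Progress", date(2019, 9, 4)),
]

def cfd_counts(history, states, day):
    """For each state, count items that reached it on or before `day`.
    Plotting these counts per day, stacked, yields the CFD bands."""
    order = {s: i for i, s in enumerate(states)}
    reached = {s: set() for s in states}
    for item, state, d in history:
        if d <= day:
            # Reaching a later state implies having passed the earlier ones.
            for s in states[: order[state] + 1]:
                reached[s].add(item)
    return {s: len(ids) for s, ids in reached.items()}

counts = cfd_counts(history, STATES, date(2019, 9, 4))
# WIP on a day = arrivals band minus departures band (top line minus bottom).
wip = counts["Defined"] - counts["Deployed"]
```

Running this for each day in a range and stacking the counts reproduces the mountain shape the post describes.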
      </item>
      <item>
         <title>Recognizing Common Flow Issues</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/recognizing-common-flow-issues</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/recognizing-common-flow-issues</guid>
         <pubDate>September 19, 2019</pubDate>
         <description>In my previous blog post, I explained why Cumulative Flow is the One Chart You Need to Know when it comes to agile development practices. Now, I’d like to focus on some common CFD patterns, and how they can be an indicator of flow problems. Let’s take a look at a few examples. Mismatched Arrivals/Departures Mismatched arrivals and departures are probably the most common CFD pattern you’ll come across. This typically occurs when more work comes into the system than is being completed. To see this visually, you can draw a series of vertical lines, which indicate work in progress (WIP). Over time, the lines get taller. This is an issue because it indicates a fundamentally unstable process — as your WIP increases, your cycle time increases. Mismatched arrivals/departures in a CFD The solution: Enforce WIP limits so that new tasks don’t enter a flow state until other tasks are completed. Flat Lines Another common CFD pattern is flat lines. Flat lines indicate that no work is being completed. It’s important to note that a cumulative flow diagram doesn’t really suggest why this is occurring, but it does show you that it is occurring, so that you can ask the right questions of your team in a timely fashion. That way, you can assess the situation and see if there’s anything that needs to be done. Flat lines in a CFD The solution: Try to identify and resolve any blockers. Talk to your team about what’s holding up their work and brainstorm ways to solve it. Bulging Bands Bulging bands occur when one or more bands suddenly increases in thickness. Said differently, your WIP is increasing in a specific flow state. As WIP increases, you get longer cycle times. What makes bulging bands unique is that there’s not always a</description>
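The WIP-limit remedy above can be made concrete with a small sketch (the state names and limits are hypothetical, not Rally's model): a task may be pulled into a state only while that state is under its limit, so the fourth pull is refused until something finishes.

```python
# Per-state WIP limits (hypothetical values for illustration).
WIP_LIMITS = {"In Progress": 3, "Review": 2}

class Board:
    def __init__(self):
        self.states = {"To Do": [], "In Progress": [], "Review": [], "Done": []}

    def pull(self, task, target):
        """Move `task` into `target` unless that would exceed its WIP limit."""
        limit = WIP_LIMITS.get(target)
        if limit is not None and len(self.states[target]) >= limit:
            return False  # blocked: finish something before starting more
        for tasks in self.states.values():
            if task in tasks:
                tasks.remove(task)
        self.states[target].append(task)
        return True

board = Board()
for t in ["A", "B", "C", "D"]:
    board.states["To Do"].append(t)
board.pull("A", "In Progress")
board.pull("B", "In Progress")
board.pull("C", "In Progress")
ok = board.pull("D", "In Progress")  # refused: the state is at its limit
```

Here `ok` is `False` and "D" stays in To Do until A, B, or C moves on, which is exactly what keeps a CFD band from bulging.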
      </item>
      <item>
         <title>What is Pair Programming? Benefits and Getting Started</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/what-is-pair-programming-benefits-and-getting-started</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/what-is-pair-programming-benefits-and-getting-started</guid>
         <pubDate>August 27, 2019</pubDate>
         <description>Have you ever worked on a coding project so closely and intently that once you took a step back, you realized there were glaring errors? Or perhaps the code didn’t end up doing what you originally thought it would. In agile software development, it can be extremely beneficial to get someone else’s undivided attention to conduct an intensive code review. Pair programming, also known as “pairing,” helps do just that. What is Pair Programming? Pair programming is the collaborative practice of two developers working on one machine to generate code together. Person 1 is designated as the “driver”, who operates the keyboard and mouse. Meanwhile, Person 2 acts as the “observer” or “navigator”, who reviews the accuracy of the code and analyzes the elements and flow of what is being presented. Benefits of Pair Programming Have you ever heard the saying, “Two heads are better than one”? Well, in the context of pair programming, it’s true! Here are 3 major benefits that teams can get from conducting these collaborative sessions. Team Building Opportunities: Pair programming is a terrific opportunity for developers to communicate and collaborate with one another. It is a great chance to connect with extended teammates whom they don’t work with on a day-to-day basis. Building and strengthening these working relationships is important for any successful agile team, and can lead to increased transparency and camaraderie. Learn, Share, Gain Perspective: Pair programming promotes bi-directional knowledge-sharing. Every person on your team—including yourself—has unique experiences and skills that shape their knowledge and working style from a development perspective. As a result, someone might know a tip or trick, or have background context, that might untangle a blockage, accelerate the progress of a project, or provide a completely different approach or resolution to a problem. Catch More Errors, Sooner: Have</description>
      </item>
      <item>
         <title>NVMeOF telemetry on Ethernet switches</title>
         <link>https://www.broadcom.com/blog/nvmeof-telemetry-on-ethernet-switches</link>
         <guid>https://www.broadcom.com/blog/nvmeof-telemetry-on-ethernet-switches</guid>
         <pubDate>October 25, 2019</pubDate>
         <description>NVMe drive bandwidths are on the rise. PCIe Gen4 NVMe drives are now available on the market and drives using PCIe Gen 5 are expected to be available within the next two years. With the advent of new non-volatile memories, there is a need for high-bandwidth fabrics to interconnect these drives. Figure 1: Ethernet and PCIe port speeds While the speed of NVMe drives has been increasing at an impressive rate, Ethernet port bandwidth has been improving at an even faster rate. The chart above shows how Ethernet and PCIe bandwidth have increased over time. Over the last eight years, Ethernet switch bandwidth has grown over 40 times. The cost and power per unit bandwidth have decreased dramatically with every generation. A large ecosystem behind Ethernet switches makes Ethernet an ideal candidate for a unified fabric for compute, networking and storage. Figure 2: Ethernet switch progression For an ideal storage fabric solution, the key parameters of interest are port speed, scale, cost and operations per second. Ethernet switches provide the required port speeds and scale with better economics than Fibre Channel. For network operations involving storage, it is important to track Service Level Agreement (SLA) compliance. Critical SLA metrics are typically uptime, Input/Output Operations per Second (IOPS) and latency. Continuous monitoring is required to measure performance, identify hotspots and isolate faults. When SLAs are not met, there is a need to quickly determine whether the root cause is the application, server, network or storage. Typical Ethernet fabrics do not provide storage-related metrics such as IOPS, IO types (Read/Write), completion times, discovery and latency. The Trident 4 switch family has a compiler-driven, fully programmable architecture with a rich set of instrumentation features. The data plane of Trident 4 is programmed using Network Programming Language, which provides a rich</description>
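The SLA metrics named above (IOPS and latency, split by IO type) can be derived from per-IO completion records, which is the kind of data storage telemetry exposes. A toy sketch with invented numbers, not an actual Trident 4 data format:

```python
import statistics

# Hypothetical per-IO completion records: (io_type, completion_time_us).
completions = [
    ("read", 90), ("read", 110), ("write", 250),
    ("read", 95), ("write", 240), ("read", 105),
]
window_us = 1000  # assume these IOs completed within a 1 ms window

# IOPS over the observation window (IOs per second).
iops = len(completions) * 1_000_000 // window_us
# Mean completion time, split by IO type as the post suggests.
read_latency_us = statistics.mean(t for op, t in completions if op == "read")
write_latency_us = statistics.mean(t for op, t in completions if op == "write")
```

Tracking these per flow, per server, or per drive is what lets an operator decide whether a missed SLA is an application, network, or storage problem.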
      </item>
      <item>
         <title>AIOps Silicon Insights delivers unparalleled network visibility and AI-driven remediation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-silicon-insights</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-silicon-insights</guid>
         <pubDate>November 4, 2019</pubDate>
         <description>Our customers are experiencing challenges today with managing complex network architectures like SDN, SD-WAN and NFV to deliver an innovative, reliable and responsive digital experience. Cost is one major challenge. It is expensive to run network operations today, requiring many personnel to architect, monitor, understand patterns and triage these new architectures. Recent studies show that an enterprise loses an average of $9k every minute during a data center outage. Some of our customers have reported up to $20M in losses to data center outages per year. Speed is another challenge. Today’s digital experience requires a new level of network operational responsiveness to be able to identify, diagnose, react to and solve problems quickly to avoid bad customer sentiment and mounting operating losses. It is critical to bring AI and ML to network operations today, through an advanced AIOps solution, to reduce operating costs, avoid outages and speed up root cause analysis for faster triage and operations. AIOps can eliminate the hours, days and weeks spent on manual pattern identification, poring over trend charts looking for any unusual activity in the network. It can enable advanced root cause analysis and anomaly detection through intelligent thresholding and alarm noise reduction. Furthermore, AIOps enables advanced predictions, correlations and automated network triage. AIOps can learn from analyzing network activity to automate, repair and tune the network for reduced operational costs and faster triage, delivering consistent and exceptional digital experiences. Figure 1: Clicking on the affected metrics tab displays the power of our Machine Learning (ML) algorithms used to identify anomalies in network behavior. We all understand that no one solution fits all problems, and network monitoring is no exception. Some next-gen use cases such as dynamic traffic engineering require granular visibility at flow or packet level in real time. 
Broadcom’s Inband</description>
      </item>
      <item>
         <title>Rally Delivers Industry-Leading Data Protection with Customer Managed Keys</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/customer-managed-keys-security-offering</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/customer-managed-keys-security-offering</guid>
         <pubDate>November 8, 2019</pubDate>
         <description>We are pleased to announce the upcoming availability of our industry-leading Customer Managed Keys (CMK) functionality. CMK is a premium security and privacy add-on for our Enterprise customers that allows them to manage their own encryption keys for the most sensitive data stored in Rally. While Rally already provides high levels of security and encryption, including encryption at rest and in transit for all of our customers, CMK takes this to the next level by giving customers the option to manage and audit encryption in an individual subscription. What is CMK? Customer Managed Keys, or CMK, goes by a few different names in the market. Sometimes called Bring Your Own Key (BYOK), Enterprise Key Management (EKM), or Bring Your Own Encryption (BYOE), CMK is an architectural pattern that allows you, the customer, to use your own Key Management Server (KMS) to manage the security of your sensitive data. With CMK you gain control over your data. This means you get audit trails that show access to your data and integrate with your existing Security Information and Event Management (SIEM) systems. It also means you have the ability to revoke access to that data at any time and for whatever reason, independent of Rally. Finally, your data will be encrypted with keys that are completely unique to your organization. These are neither shared by other customers nor visible to Rally's own operations team. What is Driving CMK? At a high level, an increasing number of regulations are driving more stringent controls of data across a broader range of industries. Consumer personal information and education, health, and finance data are all impacted by these regulations. We understand the difficulties in complying with these regulations while using cloud services. Enterprise security and compliance teams also face increasing complexities and challenges in</description>
      </item>
      <item>
         <title>Mainframe skills needed? We got this.</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-skills-needed-we-got-this</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-skills-needed-we-got-this</guid>
         <pubDate>November 12, 2019</pubDate>
         <description>SKILLS CHALLENGE CONTINUES Imagine sitting in a room at a “U” shaped table along with approximately 35 people listening attentively to each presenter. I cringed as I heard these words from customers, again and again. It was a repeat of last year: “Mainframe skills continue to be our challenge.” “We need to backfill quickly as we have those retiring.” “We just can’t seem to find the right candidates.” “It takes too long to get new hires up and running – we need at least 5 years!” Yes. The talent issue is a recurring theme in the industry. However, I am excited to share our progress, and I hope it’s received as evidence that Broadcom is actively working on the problem and moving the needle to close the gap. Since Broadcom’s acquisition of CA, I’ve taken on the responsibility of managing our Mainframe Education organization and Customer Advocacy Programs. Within Customer Advocacy, we have a program called “The Mainframe Strategic Advisory Council” (MSAC) where we gather North America and EMEA customer influencers and decision-makers a few times a year to make sure we’re incorporating customer feedback into our planning. The MSAC allows us to share our thoughts on strategy and direction and, more importantly, to gain insights from our customers on how we can further help them in their business and IT initiatives. We’ve heard time and time again at this event that there is a growing concern about the Mainframe platform and the high demand for its skills. We know that the Mainframe is critical in: Handling large-scale transactions. Supporting thousands of users and application programs accessing many resources. Managing terabytes of information in databases. However, the skills situation is a barrier to innovation. And while the industry is making some moves to rectify it, it has more work to do to</description>
      </item>
      <item>
         <title>Mainframe Database Transformation -- Why Risk Migration?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-database-transformation-why-risk-migration</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-database-transformation-why-risk-migration</guid>
         <pubDate>November 15, 2019</pubDate>
         <description>We sometimes have conversations with customers considering migrating their mission-critical applications from the mainframe to other platforms. This may be due to issues around costs, skills, and perceptions that the technology is “old”. But as we know, the mainframe is a proven, reliable, scalable, and high-performing platform, with optimizations such as zIIP specialty engines to reduce the cost of operations, and it is keeping pace with technology. We can build upon the existing value already invested in the mainframe and continue to exploit the benefits of the platform. So is migration worth the risk? Why mess with success? At Broadcom, we face similar decisions. Transforming the mainframe for Hybrid IT, making it a more integrated and agile platform, is the best decision for us. From a technological perspective, that means being open, frictionless and optimized. The mainframe needs to be open: all the latest tooling and technologies should be able to work across all environments, most importantly with the mainframe powerhouse. We need to develop frictionless solutions that encourage rapid adoption and consumption of the capabilities the mainframe has to offer across ALL skill levels. Through machine learning and automation, we need to help our customers optimize to get the most out of the mainframe platform, realizing the greatest possible efficiencies to maximize both human and system resources. This is our focus across all our products here at Broadcom and it’s what we think about as we make investments to further our products, including the transformation of our mainframe databases – which ties back to our overall mainframe strategy. Transforming Databases for Hybrid IT By opening up and making our databases more extensible through the use of APIs and services, we are promoting application modernization by further enabling integration</description>
      </item>
      <item>
         <title>AI in PPM? Let's get real</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/ai-in-ppm-let-s-get-real</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/ai-in-ppm-let-s-get-real</guid>
         <pubDate>July 23, 2019</pubDate>
         <description>Artificial Intelligence or AI is one of those buzzwords that seems to generate way more coverage than it deserves. Everyone talks about it, but how many people can actually do it? Well, when it comes to project and portfolio management (PPM) software providers, Gartner puts it fairly bluntly – “PPM technology providers will market AI as an integral part of their product strategies sooner than their products will actually be able to deliver truly valuable AI.”

Clarity PPM is different. We don’t want to sell you empty promises, we want to demonstrate how we are actually using technology to improve your business. Here’s an example.

Resource management is hard: you have to find the right people for every task, with the right skills and the right level of experience. And they need to be available when you need them, for the period of time that you need them. Today, Clarity PPM offers you world-class, data-driven resource planning and allocation engines that help you identify the best way to use your resources. That allows you to deliver as much work as possible in the shortest time possible.

Is that AI? No, not yet. But here’s the thing. It’s industry-leading functionality – it’s helping you improve the quality of resource management today. And it’s based on the complete, accurate and timely data that drives all of Clarity PPM. And that’s the platform we’re building our AI on top of. As Gartner notes, “AI will have a significant and very positive impact on PPM leaders and the PPM technologies they use” – it’s coming, and it’s coming soon.

Wouldn’t you want your AI enabled PPM solution to be based on the best possible platform?
</description>
      </item>
      <item>
         <title>Dealing with Dependencies</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/dealing-with-dependencies</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/dealing-with-dependencies</guid>
         <pubDate>December 11, 2019</pubDate>
         <description>Before we dive in, let’s address the elephant in the room: How do you define dependencies? While there are multiple definitions across agile organizations, what we’ve found at Rally is that you can generally separate them into two categories – sequencing and functional availability. Sequencing Most conversations I have with customers are around making sure that something gets done before something else. In terms of agile tooling, this is called sequencing. While I don’t want to minimize concern over this, what we’ve found is that it is more of a perceived problem than a real one. And here’s why. Agile software allows you to sequence activities within sprints. This allows User Stories to be ordered correctly so long as the team (or teams) involved have the correct collaboration techniques in place. Essentially, each team can schedule work according to the needs of the team, or the needs of the collaborating team. Usually, each team has its own sprint board (e.g. Kanban or Scrum). When teams are co-located, the discussions between teams are fairly easy to instigate to make sure that delivery is synchronized. The problem gets far more complex when dealing with distributed development teams on a bigger scale. Scheduling the delivery of parts of one initiative that depend on a part of a different initiative can be the cause of many tense meetings. If these are across multiple time zones and locations, you can expect migraines to ensue. This is where it starts to make sense to have visible connections between work items in a single tool that spans the entire organization. That way, disparate teams working on different projects can see why they need to get certain things done. Functional availability When the size of a chunk of work increases, we may be starting to talk about a</description>
      </item>
      <item>
         <title>Software Capitalization and Agile - The Problem</title>
         <link>https://www.broadcom.com/rally/software-capitalization-and-agile-the-problem</link>
         <guid>https://www.broadcom.com/rally/software-capitalization-and-agile-the-problem</guid>
         <pubDate>December 18, 2019</pubDate>
         <description>Not too long ago, conversations with customers about agility dealt with the Why and What: Why do I need to think about this agile stuff? What do you mean by business agility? But today, it seems that most customers understand the value of agility – they know they can build more software faster, with higher quality and better predictability, and above all, keep up with their customers to avoid disruption. They’ve usually embarked on some level of agile transformation, and everyone speaks the vernacular. It’s not about the 'Why' and 'What' anymore, but rather about all the nuances of 'How'. These conversations inevitably end up around roadblocks to business agility that customers are battling. Sometimes these roadblocks seem as numerous and unique as the companies themselves. But there are common patterns: lack of effective leadership, failure to think holistically across the enterprise, and inability to centralize or decentralize decisions correctly are three categories that we see a lot. These can all be challenging to address, because the behaviors that need to change are rooted in culture and habit, which are notoriously hard to modify. However, there is one roadblock to agility that we are seeing that falls outside these patterns of culture and habit. It is based on the simple need for modern organizations to effectively manage their finances by tracking capitalizable and non-capitalizable software development expenses – Capex and Opex. The thought is that the only way to do this is by tracking time – but tracking time is viewed as waste in an agile organization. So, what can we do? First, let’s talk about the problem. Why is software capitalization such a challenge for agile organizations? In the 80’s and 90’s, there was an explosion of software built for internal use. Software moved from something that companies purchased to something</description>
      </item>
      <item>
         <title>If your portfolio isn’t strategic, what is it?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/-if-your-portfolio-isn-t-strategic-what-is-it</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/-if-your-portfolio-isn-t-strategic-what-is-it</guid>
         <pubDate>March 14, 2019</pubDate>
         <description>One of the most significant areas in which a modern project management office (PMO) will need to make changes in the future is with its project and portfolio management approach. While the list of things that will ultimately need to be done is long, fortunately the place to start is with a few simple shifts in your current thinking: Move away from the mental model of demand management. Regard all proposals as major investments of the enterprise’s valuable resources (that is, it isn’t just about projects anymore). Begin to practice radical transparency. Use “contribution to strategy” as your mandate and implicit authority. Part of the change that is occurring with the advent of digital business is that, increasingly, an organization’s portfolio of internal investment options is no longer regarded as just demand being placed on IT. It isn’t that demand for IT isn’t important (it’s still one of the most critical execution issues), but at the front end of the portfolio process, contribution to strategy matters more. Moving up the maturity curve from demand management to practicing true portfolio management isn’t an overnight activity. The first change we recommend is segmenting demand. Strategic investments do NOT belong in the same intake process as low-level service requests. Digital business requires having an intake process, supported by the right tool, that doesn’t treat a multi-million dollar investment proposal with the same level of gravitas as a 40-hour change request. To put it bluntly, the difference between a Level 2 maturity PMO (process driven, tactically focused) and a modern PMO (strategically focused) is the ability to get out of the weeds. The second change we recommend for moving beyond demand management is to move the portfolio management function into a separate department and retitle it something like investment portfolio office (IPO). Your goal</description>
      </item>
      <item>
         <title>Agile planning lets you innovate faster</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/agile-planning-lets-you-innovate-faster</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/agile-planning-lets-you-innovate-faster</guid>
         <pubDate>February 27, 2019</pubDate>
         <description>Agile planning starts with leadership, but it also has to reach the lower-level execution stages as quickly as possible. There’s a need to more closely integrate leadership and delivery functions in order to improve the quality of project delivery. This is a critical element of agile planning, and in this post I want to look more closely at how that integration occurs, focusing on how strategy drives execution. We discussed the concept of enterprise agility - the need to adjust and evolve strategy to respond to threats and opportunities in the organization’s environment. This results in strategy being a very fluid notion: While there should be directional consistency in the medium term, the specific strategic goals will evolve continuously as customer demands, market opportunities and operational necessities shift. This must result in similar ongoing adjustments of the projects that are the mechanism for delivering that strategy, in order to maintain alignment between the benefits being delivered and those that are required. For that process to be effective, it cannot involve all of the decisions being made at the strategic level. That would not only consume too much time and effort analyzing change, but it would also separate decision making from where the knowledge and understanding of the projects that need to absorb those changes resides - the execution level. Instead, decision making on the mechanics of the changes necessary to maintain alignment with strategic goals must exist within the teams that are delivering those projects. Project managers and their teams must be empowered to change project elements to ensure their initiatives still deliver “on benefit” even when that benefit has evolved from what was originally envisaged. This distributed decision making is challenging for both leadership and project functions. Leaders are relying on relatively low-level teams to make decisions that impact</description>
      </item>
      <item>
         <title>GUEST BLOG:  From sand to software – my vantage point</title>
         <link>https://www.broadcom.com/blog/from-sand-to-software</link>
         <guid>https://www.broadcom.com/blog/from-sand-to-software</guid>
         <pubDate>December 11, 2019</pubDate>
         <description>As the CTO of Extreme Networks, and given that part of my career has been in developing chips for communications equipment, I get asked from time to time, “Why doesn’t Extreme do their own networking chips like Cisco or HPE?” My Vantage Point here isn’t specific to Extreme. It is really the answer that most customers should think about, no matter who their network vendor is. Said differently, if a customer buys a product that is built on proprietary silicon rather than merchant silicon, then over the course of their ownership of the product, they will recognize that they have made a strategic mistake that is going to cost them in terms of productivity and total cost of ownership, to the detriment of their enterprise. (Think proprietary CPUs vs. X86 for instance. How’d that work out?) How so, you might ask? If you look at one of the aforementioned companies, their most recent product family is built as a derivative cost-reduction from a chip family that has been around for many years. Yes, there are a few cute new additions, but essentially the end switching product is built on a cost-reduced version of yesterday’s news. Does it work? Sure. It does a fine job of delivering the features of yesteryear, and its future trajectory will therefore be incremental enhancements to yesteryear as well; because that is the heritage – the DNA, so to speak – of this technology. Don’t get me wrong, it does have some solid features, but let’s dig deeper. The advantages of merchant silicon Let’s compare this to merchant silicon. First, a merchant semiconductor supplier has an inherent advantage over any product company’s internal chip team with the exception of unicorns such as Apple. In particular, the nature of a merchant semi supplier is</description>
      </item>
      <item>
         <title>Broadcom receives award for continued excellence in semiconductor industry</title>
         <link>https://www.broadcom.com/blog/broadcom-receives-award-for-continued-excellence-in-semiconductor-industry</link>
         <guid>https://www.broadcom.com/blog/broadcom-receives-award-for-continued-excellence-in-semiconductor-industry</guid>
         <pubDate>December 10, 2019</pubDate>
         <description>Broadcom is honored to receive the “Most Respected Public Semiconductor Company” award in the greater than $5 billion annual sales category at the 2019 Global Semiconductor Alliance (GSA) awards ceremony. The recognition is particularly significant to Broadcom in that it is awarded by mutual balloting of the GSA members to recognize peer companies for their vision, technology and market leadership.
Boon Chye Ooi, Broadcom’s Senior Vice President of Global Operations, accepted the award on behalf of all Broadcom teams and is pictured above. KS Pua, Chairman &amp; CEO of Phison Electronics (L) and Eric Starkloff, President &amp; CEO of National Instruments (R), are also pictured.

GSA established the awards decades ago to recognize the excellence of top-performing semiconductor companies worldwide.
</description>
      </item>
      <item>
         <title>AI is coming, finally</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/ai-is-coming-finally</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/ai-is-coming-finally</guid>
         <pubDate>July 11, 2019</pubDate>
         <description>It seems like AI – artificial intelligence – was always just around the corner, for my entire career. But now it's finally coming, and project portfolio management, or PPM, is one of the more unlikely arenas where it’s emerging. It’s not here yet, but it’s not far off, so let's explore exactly what’s coming.

“AI is about to revolutionize how PPM leaders leverage technology to support their organization.&quot; That’s the first line in Gartner’s latest assessment of AI in the PPM space, and they aren’t wrong. PPM solutions will become voice-integrated with conversational AI. That will not only make interaction with the solutions more intuitive, but will also help to drive real value by eliminating the need to ask the exact right question in a reporting query.

Machine learning will feast on the massive amounts of data that PPM solutions maintain, delivering steep growth curves that will quickly improve the quality of estimates, risk assessments, forecasts and countless other areas of portfolio and project delivery. And robotic process automation will not only eliminate expensive, low value, manual tasks, it will also optimize workflows and improve efficiency in every aspect of the portfolio lifecycle.

These technologies are closer than you think. You’ll start to see them within the next 12 months, and within three to five years they will be commonplace. And that opens up a whole new level of performance that you can only dream of today.  But only if you choose the right PPM partner – one that understands AI and is committed to leveraging it to improve the quality of your business.
</description>
      </item>
      <item>
         <title>Broadcom introduces second generation dual-frequency GNSS</title>
         <link>https://www.broadcom.com/blog/broadcom-introduces-second-generation-dual-frequency-gnss</link>
         <guid>https://www.broadcom.com/blog/broadcom-introduces-second-generation-dual-frequency-gnss</guid>
         <pubDate>November 13, 2019</pubDate>
         <description>Who doesn't use GNSS these days? GNSS is the global navigation satellite system that encompasses GPS and all other satellite constellations in the world. GNSS technology is in every smartphone. It is used to help decide the best route to a given destination depending on real-time traffic and road conditions. It enables you to share your real-time location with friends over messaging applications. It has really become part of our lives. When the world thought everything had already been invented in GNSS, Broadcom introduced in 2017 the first mass-market implementation of dual frequency: the BCM4775. This chip makes use not only of the classic L1 frequency broadcast by every satellite, but also of the more advanced L5 signal broadcast by a subset of the satellites. The use of this enhanced L5 signal improves the accuracy of GNSS in urban scenarios, as it mitigates the main source of error: reflections off nearby buildings, also known as multipath. It also improves GNSS in an open-sky scenario, allowing submeter accuracy, a performance bar previously unmet in smartphones. Ever since, the BCM4775 has been adopted in flagship smartphones, smartwatches and fitness devices. Given the unabated need for better precision and accuracy, we have continued to innovate. We are excited to introduce our second generation dual-frequency GNSS solution -- the BCM4776. You may ask, “What’s the improvement?” These new chips will be capable of using the new BeiDou-3 constellation's B2a signals, BeiDou's counterpart to L5. This means that the second generation dual-frequency GNSS will be able to track 30 new L5-band signals (60 percent more) with a significant impact on accuracy. And the benefit? End users will experience much higher reliability of the submeter accuracy that is inherent to dual-frequency L1-L5. Second generation dual-frequency GNSS will</description>
      </item>
      <item>
         <title>Broadcom is Value Leader in EMA RADAR Report for Workload Automation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/broadcom-is-value-leader-in-ema-radar-report-for-workload-automation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/broadcom-is-value-leader-in-ema-radar-report-for-workload-automation</guid>
         <pubDate>November 19, 2019</pubDate>
         <description>The latest EMA Radar report for Workload Automation is out and we are proud to once again be recognised as a value leader within this important sector for IT Operations. As Dan Twing comments in his introduction, the primary driver of change within Workload Automation has been Digital Transformation, a mandatory journey for all enterprises that want to survive and grow in today’s competitive market. In fact, that was one of the primary drivers behind the development of Automic. Automation provides a key ingredient for our Digital BizOps Platform, which combines AIOps, DevOps, Value Ops and an Automation Center of Excellence in a single unified platform to fuse business and IT, all aligned to create great business outcomes. The measurements for this research have changed since the last edition, published in 2017. This shows that although the market is 40 years old, significant investment is required not just to tinker with features within the product but to add significant functionality, driving forward the use of automation not just in IT Operations but across the entire enterprise, and delivering ever-increasing value to the business through dramatic business outcomes as it adopts more automation. Today’s Workload Automation It’s been an exciting couple of years: two full releases of Workload Automation have been made available since the last report. The latest release brought the ability to deliver an Automation Center of Excellence to drive systemic automation across the business, a focus on companies’ Digital Transformation with Digital Business Automation, and an extension of automation to AIOps by providing an integrated and seamless experience to automatically remediate issues that have occurred or are predicted to occur. Automic is not just about your workload automation; we have also brought the intelligent pipeline to drive your continuous delivery through Automic. The</description>
      </item>
      <item>
         <title>Fostering a Culture of Growth with Rotations and Developer-Swaps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/fostering-a-culture-of-growth-with-rotations-and-dev-swaps</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/fostering-a-culture-of-growth-with-rotations-and-dev-swaps</guid>
         <pubDate>December 16, 2019</pubDate>
         <description>Since our inception in 2001, Rally Software has offered many opportunities for employees to learn new skills and explore cross-functional roles as part of the company’s investment in personal and professional development. And although Rally is now part of the Enterprise Software Division at Broadcom, the commitment to personal development opportunities lives on to this day, specifically in the form of Rotations and Developer-Swaps (also known as Dev-swaps). What are Rotations? A Rotation is an experience in which a developer temporarily leaves their team to support another development team, typically for one week in an iteration. At Rally, it is typically developers who participate in Rotations and Dev-swaps; however, these experiences are not limited to developers and can include a number of roles and job functions. Also, although Rally conducts 1-week Rotations and Dev-swaps, the duration for other agile development organizations may vary. During this time, this person participates in the new team’s scrum ceremonies, pulls work from the team’s board and reviews pull requests. Something to consider before implementing Rotations in your organization is that when someone participates, their original team will need to anticipate operating with one less person, which affects team velocity. What are Dev-Swaps? Dev-swaps are very similar to Rotations, although in a Dev-swap, two developers from different teams temporarily switch roles with one another, typically for 1 week in an iteration. With Dev-swaps, velocity for both teams remains unaffected since there is no change in the number of team members. 5 Benefits of Rotations and Dev-Swaps Get a new perspective. Rotations and Dev-swaps are an excellent opportunity to observe how other teams conduct their daily stand-ups, code reviews and more. 
When an individual returns to their respective team, they might be inspired to incorporate a development practice, or share a skill that they learned. Improve</description>
      </item>
      <item>
         <title>Roadmap for roadmaps</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/roadmap-for-roadmaps</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/roadmap-for-roadmaps</guid>
         <pubDate>May 3, 2019</pubDate>
         <description>Clarity PPM has innovative roadmap functionality – a way to map projects to the company's long-term strategy for digital transformation, new products, and sometimes the organization itself. But as delivery windows get ever shorter, are roadmaps still relevant?

“Yes, they are more relevant today than ever,&quot; said Linda Chase, a PPM product expert at Broadcom. &quot;Organizations are realizing the traditional approach to investment planning - collecting and prioritizing demand, merging new work with in-flight projects and sorting arbitrarily - is not working.”

Chase advocates shifting to top-down planning – aligning the work that gets approved with the roadmaps for how products, services and the company need to grow. She points out that this approach avoids the organizational fatigue that comes from producing and reviewing hundreds of business cases, many for projects that have no hope of delivering or no alignment with the organization’s goals. That traditional approach often results in everything being a number one priority.

Planning must be fast, easy and effective. It should be integrated across the whole organization, prioritizing the organization's long-term vision. It must also allow the organization to pivot instantly with minimal disruption – and that simply can’t happen without that top-down focus.

So how do organizations make that shift? Well, it’s not difficult:


	Start with the high-level objectives – Chase calls them the “big rocks”
	Involve key stakeholders to capture their strategies
	Connect current work with the long-term vision
	Place emphasis on growth and broaden the realm of possibilities


Of course you need accurate, timely and complete information to do that, and that’s where Clarity PPM, with its roadmap functionality, comes in handy.

Here's an ebook on working with strategic roadmaps.
</description>
      </item>
      <item>
         <title>Is gamification right for PPM?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/is-gamification-right-for-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/is-gamification-right-for-ppm</guid>
         <pubDate>March 7, 2019</pubDate>
         <description>Let's play.

If you could implement a simple solution to make 87 percent of employees feel more productive, 84 percent more engaged and 82 percent happier, would you do it? Of course you would, and that’s exactly what gamification delivers, according to a recent survey by TalentLMS.

That survey was by a software company in the learning-and-development industry, and those are two areas – software and learning – where gamification is seen as having a big impact. But gamification can be leveraged to improve many different aspects of any business; this blog has some great strategies to consider.

Teams are becoming ever more critical to organizational success today – more and more money is focused on transformation and change, as the speed of evolution for all industries continues to accelerate.

To deliver that change, there's a shift from projects to products, replacing scheduled periodic releases with an ongoing stream of functional enhancements.

That results in a shift to more permanent teams, where the need to build and maintain cohesion, engagement and performance is more important than ever.

Gamification isn’t a solution to all of the challenges your teams face, but as part of an overall team development and engagement strategy it can be a tremendous asset.

The perception that it only works with millennials, or only in certain corporate cultures, is wrong. That same TalentLMS survey showed that 90 percent of employees over the age of 45 believed gamification would help them achieve better results.
</description>
      </item>
      <item>
         <title>Mainframe Customer Support: Always Listening, Always Improving</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-customer-support-always-listening-always-improving</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/mainframe-customer-support-always-listening-always-improving</guid>
         <pubDate>December 6, 2019</pubDate>
         <description>I am writing a series of blogs regarding how our Mainframe Support team is handling cases and improving the customer experience. In my last blog, I discussed that our Mainframe Support team is listening carefully to you, our valued customers, so that we may better serve you. Recently I’ve had the opportunity to meet directly with many of you at Tech Exchange meetings and Mainframe Strategic Advisory Council (MSAC) events. I have heard that you are quite impressed with the knowledge and professionalism of our engineers and appreciate their efforts in resolving your cases quickly. I have also heard loud and clear that you are getting more and more interested in self-service. Consider this situation: It’s 5:00 a.m. and there is a problem with a report that my manager needs on her desk in three hours. What should I do? Welcome to the world of self-service. What follows is a brief look at what we are doing in this area. In July we introduced a new Support survey which broadened the focus from the support transaction itself to your experience with our self-service channels prior to case creation. What has become apparent in these surveys is that self-service is now part of the support experience. You often use self-service in an attempt to resolve your question or problem before even thinking about opening a Support case. To improve our self-service, we have implemented Knowledge Centered Service (KCS) across Mainframe Support. Knowledge Centered Service is a methodology which incorporates the use, validation, improvement and creation of knowledge into the process of resolving a support case. As part of KCS, our Support Engineers start to search, review, update, or create knowledge when they begin working a case. This way, knowledge is corrected and created on a just-in-time basis – based on</description>
      </item>
      <item>
         <title>15 Cool Things About Clarity PPM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/15-cool-things-about-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/15-cool-things-about-clarity-ppm</guid>
         <pubDate>November 7, 2019</pubDate>
         <description>Make sure you are getting the most out of Clarity PPM when you drive your company's digital transformation. Here are fifteen cool things about the project portfolio solution: 1. A modern, social user experience Each day, workers become more accustomed to living in an app economy where communication is easy and technology simplifies their lives. But enterprise tools—including many PPM solutions—haven’t kept up. They don’t simplify everyday tasks, don’t facilitate in-context communication and sometimes don’t even provide a way to see what people are working on without navigating through multiple screens. As a result, we redesigned Clarity PPM to be faster, easier and more intuitive. Everyday tasks are simpler, collaboration is enhanced, visibility is comprehensive and organizations’ most pressing issues can be resolved without having to export data. We’ve transformed Clarity PPM into an easy-to-use, single source of information for all types of projects, regardless of the manager, department or team. 2. Project blueprinting In our study, we found that teams were navigating through reams of information that had no relevance to them or the tasks they were trying to accomplish. The data IT needs to access is typically very different from that required by marketing or development teams. And too much “noise” clouds visibility, diminishes focus and impedes effective problem solving. To address this, Clarity PPM introduced a partitioning feature we call “blueprinting.” Blueprints are team-specific pages that are populated by each team’s custom fields and nothing more. A team starts with a standard blueprint onto which members drag and drop their visuals, documents and custom attributes. New blueprints can be created in minutes, and any changes are pushed to member screens automatically. 3. Familiar financials The ability to easily access, view and modify project data is important to ensuring money is well spent and projects are executed</description>
      </item>
      <item>
         <title>A New Era of Mainframe Software Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/a-new-era-of-mainframe-software-management</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/a-new-era-of-mainframe-software-management</guid>
         <pubDate>December 10, 2019</pubDate>
         <description>A new standard is coming to the area of mainframe software management. I wanted to write this article to shed more light on what changes are coming, why they are needed and how Broadcom has been participating in innovating the mainframe and making it more accessible for the new generation of Systems Programmers. This article is intended for all mainframe customers managing software from multiple vendors, as I will describe how our solution enables you to use a single tool and a single process for managing all your mainframe software. It’s time for a change Mainframe software management can be a complex and time-consuming process. It often requires expertise in multiple methodologies and knowledge of different tools for each vendor. Several years ago, CA Technologies initiated a significant effort to improve installability and maintenance for its products. In 2009 CA created the CA Mainframe Software Manager (also known as CSM and MSM), a browser-based installation and maintenance tool that enables “everything” to be packaged in standard ways (SMP/E) and eliminated many installation tasks. MSM can also support other vendors. The new z/OS platform installation strategy In 2015 IBM, CA Technologies, and other major mainframe vendors started working on the new z/OS platform installation strategy. It aimed to converge on a single, consistent way to acquire, install, maintain and configure software on the z/OS platform. This allows everyone to benefit from the simplified dialog-driven PTF installation for SMP/E service in z/OSMF Software Management. What is z/OSMF? IBM z/OS Management Facility (z/OSMF) is a web server for z/OS management applications. It provides a user interface through a browser, implements security using standard z/OS SAF-based authorization, offers integrated applications (tasks) with the capability to add additional plug-ins, and provides public RESTful services for z/OSMF and z/OS resources. 
z/OSMF is a base element of z/OS since z/OS</description>
      </item>
      <item>
         <title>Q&amp;A: What's new in Clarity PPM?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/q-a-what-s-new-in-clarity-ppm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/q-a-what-s-new-in-clarity-ppm</guid>
         <pubDate>November 14, 2019</pubDate>
         <description>In a recent interview, Clarity PPM product manager David Sprague talked about the latest features in version 15.7. Here's an excerpt: Q: What are the biggest improvements in Clarity PPM 15.7? A: Let me break it down into four buckets. In the first bucket, we have new modern tools for different personas, like project managers, portfolio managers or team members. Our task hierarchy timeline, for example, gives you that visual way of managing tasks that customers really like. We’re bringing in boards, which are very intuitive and easy ways to manage investments and work across a lifecycle. We’re also bringing in additional capabilities around idea management and custom investment types, such as cost plans that allow people to manage their costs for an idea. The fourth bucket would be filled with customer requests to help them adopt the modern UX. There's something for everyone. Q: Why should a customer upgrade to Clarity PPM 15.7? A: With this release we have brought over some of the key capabilities from the old UX, which allow many of our customers to make the switch to the new experience. There’s a big telecommunications company in Europe, for example, that was looking to combine ideas and cost plans. Voila, there it is in the new release. That’s one of many examples. Q: How does Clarity PPM 15.7 compare to the competition? A: Talking to industry analysts, Clarity PPM really is the only project portfolio management solution that’s innovating today. We’re continuously introducing modern ways of working that are simple, usable and powerful. Many apps out there have features we developed years ago. They are not thinking about it from a holistic solution standpoint. As they acquire more technologies, those applications remain independent, in most cases, leaving customers with a disjointed experience. Clarity PPM is one</description>
      </item>
      <item>
         <title>Digital Product Management: Three words that change your world</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-three-words-that-change-your-world</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-three-words-that-change-your-world</guid>
         <pubDate>December 12, 2019</pubDate>
         <description>Well, that headline got your attention, didn’t it? But seriously, we are convinced digital product management, or DPM, is going to change how you create value for your business. It will define your business investment strategy, and how you transition from a traditional project-based approach to a modern product-centric business model. In this blog series, we’re going to introduce you to the four steps to DPM and highlight how Broadcom’s solution is designed to support your journey. We’ve been investing in Clarity and Rally Software for several years, building an integrated platform that truly delivers – and not one that simply claims to support the latest trends. But why DPM? Companies are changing how they look at investments. In a digitally enabled world, where change is happening ever more rapidly, you can’t simply invest in a project that could be obsolete before it’s even done. You need to ensure the product you build both aligns with business strategy and provides the most value to your customers. This can no longer be managed in siloed projects. Instead, you need to pivot to investing in sustainable assets, like platforms and products. Those are your inventory of digital products, and the way you manage those products, how you leverage them and the investments you make in them will define how successful you are as a digitally enabled business – hence the importance of DPM. Think about Apple’s iPhone, for example. That’s a digital product that needs to be managed as one sustained asset and not many different projects. At the heart of this is the concept of the product. Don’t think of that in legacy terms; in a DPM world a product is simply: A sustained asset with no predetermined life span Something delivering value that can be articulated in business terms Something</description>
      </item>
      <item>
         <title>An Exclusive Look Into Rally’s Agile Executive Workshop</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/an-exclusive-look-into-rally-s-agile-executive-workshop</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/an-exclusive-look-into-rally-s-agile-executive-workshop</guid>
         <pubDate>December 16, 2019</pubDate>
         <description>At Rally, we have always been very transparent about our Big Room Planning (BRP) ceremonies and have even posted blogs about How Rally does Planning and Top 3 Reasons to do Big Room Planning. What you might not know is that as a precursor to each BRP, we host an invite-only workshop for executive-level guests who are driving Agile transformation in their organizations. What is the Agile Executive Workshop? The workshop is a one-day event where select guests are invited to Broadcom’s Broomfield, CO office, where most of the Rally R&amp;D organization is co-located. The workshop is led by our team of Executive Advisors (our in-house business agility experts), accompanied by other leaders from product management and engineering. This is a terrific opportunity for attendees to network with executive-level peers and interact with our senior leaders who have years of experience aiding Agile transformations in various industries, and in organizations of all sizes. Attendees participate in a guided, deep-dive discussion on topics including (but not limited to) the following: Data/Metrics Continuous Planning Organizational Changes Agile Funding Models Agile Planning Portfolio Management Tools and Platforms The goal of this workshop is to help attendees address roadblocks that they’re facing in their Agile journey, and to help them develop actionable ideas for scaling Agile in their organizations. It can be very eye-opening for attendees to hear how peers overcame similar challenges as well. A recent Executive Workshop attendee shared the following feedback with us: “I enjoyed the discussions, I had “ah-ha” moments and developed other ways to approach my transformation.” What happens next? As part of the Agile Executive Workshop experience, these guests are also invited to attend and observe Rally’s Big Room Planning ceremony the following day. This is Rally’s real product planning session focused on giving clarity and alignment on</description>
      </item>
      <item>
         <title>Unlocking the Value of the SRE Model</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/unlocking-the-value-of-the-sre-model</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/unlocking-the-value-of-the-sre-model</guid>
         <pubDate>November 15, 2019</pubDate>
         <description>This paper examines the key tool requirements that are integral to supporting SRE models, and it reveals how Broadcom offers a unique approach that helps organizations realize more value from the SRE model and do so more rapidly and securely.

Review this paper and discover how Broadcom solutions deliver complete ecosystem observability and AI-fueled intelligence, enabling teams to optimize the customer experience and boost business outcomes.

</description>
      </item>
      <item>
         <title>Modern PMO focuses on outcomes</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/modern-pmo-focuses-on-outcomes</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/modern-pmo-focuses-on-outcomes</guid>
         <pubDate>May 30, 2019</pubDate>
         <description>Over the years, the project management office devolved from providing strategic investment guidance, managing budgets and monitoring high-level execution to a much more tactical role focused on waterfall execution and Gantt charts. The trend, unfortunately, didn’t pan out for most organizations. Today, the PMO is transitioning back into a strategic role focused on portfolios over individual projects, and identifying the right initiatives at the right time, executed by the right teams. But the PMO isn’t moving away from tactical execution entirely. It’s simply expanding its scope and shifting its main focus to business results. This, of course, makes the ability to view the business at ground level as well as from 35,000 feet essential. Only from this dual vantage point can the PMO help implement investment controls that tie project execution and delivery to budgetary constraints, governance and an outcome that brings value to the portfolio and the organization. For this degree of visibility, PMOs must have the right tools. That’s why the integration between Clarity PPM and Rally Software is proving invaluable to customers. Rally allows PMOs to monitor—and to a degree orchestrate—work happening at the project level. Rally also shares real-time information with Clarity PPM where it’s combined with pertinent financial information to provide the intelligence necessary to make strategic, data-driven decisions. The strategic PMO starts with results The right tools are essential in supporting the strategic responsibilities of the PMO. But those tools are a lot more effective for the PMO that already has the right mind-set and the right approach: the PMO that starts with the desired results and works backwards, mapping out how the company will achieve them. Following are some tips for turning the PMO into a strategic powerhouse: Define successful outcomes: Start by defining a specific business goal. Break it down into supporting goals at</description>
      </item>
      <item>
         <title>Lean Portfolio Management</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/lean-portfolio-management-intro</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/lean-portfolio-management-intro</guid>
         <pubDate>January 15, 2020</pubDate>
         <description>If you do a quick search for Lean Portfolio Management, you’ll find that nearly every software vendor or consultant has a different definition. These days, it seems like companies are working to commoditize Lean Portfolio Management (LPM) just like they commoditized the term Agile back in the early aughts. What these LPM trends all boil down to is actually pretty simple. It’s all about using the best method for your company to gain the most efficiencies across the entire organization. Said differently, Lean principles are about removing waste and bottlenecks across the enterprise in order to deliver value as quickly as possible to customers. LPM focuses on all the parts of Portfolio Management and helps us apply Lean principles and practices to them. The truth is, the Lean-Agile mindset has been around Rally Software since as early as 2004. In fact, we used to show the image below to help engineering teams understand how they needed to change their mindset around the way they work. As it turns out, it’s now one of the fundamentals of Lean Portfolio Management. So what happens as organizations apply these Lean principles only to IT? Oftentimes, their good intentions have unexpected consequences. Thermodynamics and Agile Teams What happens to your cup of hot tea or coffee when you stop heating it? It gets cold. More specifically, the molecules begin to bump up against the side of the mug, where they begin to slow down and cool as a result. In this example, your agile teams are the molecules. They’re the ones bumping up against the non-agile processes in your organization (i.e. the mug, or enterprise guardrails). As soon as your organization stops applying “heat” in the form of Lean-Agile principles, training, and coaching, the teams will eventually encounter organizational friction that slows them down,</description>
      </item>
      <item>
         <title>Collaboration Over Competition in an Agile Environment</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/collaboration-over-competition-in-an-agile-environment</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/collaboration-over-competition-in-an-agile-environment</guid>
         <pubDate>January 16, 2020</pubDate>
         <description>Let's face it: corporate culture has a tendency to be one of competition, which creates tension and divides teams and peers from one another. When organizations are in the challenging pursuit of agile transformation, if teams, groups and stakeholders are working against each other, effective agile adoption is doomed to fail. A cohesive, collaborative culture matters in order to achieve success. What if we shifted the culture with persistence and creative practices, so that organizations are inspired and incentivized to work together instead? In this blog, I’ll explore some common challenges companies face when they embark on implementing Agile. I’ll also talk about some important improvements that organizations can hope to see when choosing to embrace a culture of collaboration over competition in an agile workplace. Let’s Compete! A Learned Culture How many times have you heard managers say “we let teams figure it out or fight among themselves”? Corporate culture has long been known for being one of competition. It’s portrayed in movies and TV shows. It’s reflected in real life. It’s seen in organizations large, medium, and small. Why does this culture continue to thrive? It’s because it centers upon our natural tendency to compete and challenge. (For more information on this, please read the article: Why We Compete). We are taught at a young age that we want to win, even if that means that others get left behind or stifled along the way. If you apply an unhealthy competitive approach to a professional environment, especially one that is implementing Agile, it can greatly inhibit progress. “The competitive culture is hindering us in implementing Agile.” The Challenges with Agile In any Agile implementation, you are bound to encounter issues, snags and surprises. Here are some that I’ve seen in my experience with agile organizations: Often, the roles</description>
      </item>
      <item>
         <title>Everything you need to know about DPM</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/everything-you-need-to-know-about-dpm</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/everything-you-need-to-know-about-dpm</guid>
         <pubDate>February 6, 2020</pubDate>
         <description>What’s Digital Product Management anyway? After only 14 percent of their business transformation attempts succeeded, many companies said they wanted to try something other than the usual project management tools. Listening to their needs, we created a digital product management solution to help them move beyond projects to products. Instead of running a slew of unrelated one-off projects, you shift the focus to the digital products in your business. A digital product is something people like to use, like an airport check-in kiosk, HR benefits portal or online retail store. Underneath each digital product are underpinning assets – such as databases, applications and APIs – that you manage through maintenance and upgrades. By funding products instead of sporadic work or whole departments, you immediately gain a few business advantages: Investments are organized the way your business runs. People and money are mapped to clear business outcomes. Work is prioritized based on value, not gut feelings. Teams are empowered to plan and work freely. The business acts confidently when competitors disrupt and customers demand. Here are some new resources to get you started with digital product management: Digital Product Management Paper Great overview paper by Kurt Steinle, head of products for Clarity at Broadcom. Get definitions, examples and tips about digital product management. Projects-to-Products Blog Prolific blog by Brian Nathanson, product manager for Clarity at Broadcom. Takes you into every aspect of digital product management in quick reads about the modern product owner role, new innovation dilemmas and changing lifecycles. Four Steps to Digital Product Management eBook Follow the four steps to digital product management: Organize, map, prioritize and empower. Quick read ebook. 
Moving to Products Webcast Learn how to effectively manage business outcomes by transitioning from traditional project management to product-based funding and delivery in this 45-minute webcast. Digital Product</description>
      </item>
      <item>
         <title>Digital Product Management: Organize the way you work</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-organize-the-way-you-work</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-organize-the-way-you-work</guid>
         <pubDate>January 9, 2020</pubDate>
         <description>One of the biggest problems with investing in projects is that often those projects don’t reflect how your business actually operates. In fact, they are the exact opposite. Projects are designed as temporary pieces of work which, when completed, see the project team dispersed. The business outcomes are left to a completely separate team to operationalize, leverage and secure the benefits. And that’s neither effective nor efficient. With digital products, on the other hand, there’s recognition that the business is investing in a sustained asset, one that must be maintained, enhanced and transformed. Different products will experience that lifecycle on a different cadence. A key role in Digital Product Management (DPM) is the product manager. This is a business-focused individual responsible for determining how a product will grow and evolve. Product managers must understand the environment their products operate in and the demands customers and stakeholders put on them. Successful DPM requires an organization to make investments aligned with these product managers and their focus on a long-term, customer-pleasing strategy. In this first DPM step of Organize, we recommend that you enable product managers to: Define and manage the long-term plans for the products they own Demonstrate clear accountability to the stakeholders approving the funding, showing performance that generates an acceptable return with appropriate timelines and outcomes Show tangible performance of the benefit the products deliver to their customer But in a world where every product is unique, how can you do that using traditional project portfolio management software that assumes every investment is essentially the same thing? After all, that’s what most PPM tools do. The answer is you can’t, and with Clarity you don’t have to. Now you can simply set up multi-level hierarchies and configure your investment types to align with how you want to run your</description>
      </item>
      <item>
         <title>Wi-Fi 6E unlocks the full potential of Wi-Fi in the 6 GHz band</title>
         <link>https://www.broadcom.com/blog/wi-fi-6e-unlocks-the-full-potential-of-wi-fi-in-the-6-ghz-band</link>
         <guid>https://www.broadcom.com/blog/wi-fi-6e-unlocks-the-full-potential-of-wi-fi-in-the-6-ghz-band</guid>
         <pubDate>January 6, 2020</pubDate>
         <description>Building on strong 6 GHz Wi-Fi momentum, the Wi-Fi Alliance (WFA) has announced Wi-Fi 6E, which will extend the capabilities of the Wi-Fi 6 standard into the 6 GHz frequency band. With up to 1.2 GHz of new unlicensed spectrum potentially available, this is great news for consumers, who can now access even faster and more reliable Wi-Fi networks. Wi-Fi 6E’s brand name builds on the success of the WFA’s “generational naming approach” that has resonated with consumers worldwide. We applaud the WFA in differentiating this new version of Wi-Fi 6 – Wi-Fi 6E – which signals to customers a premium connectivity technology that meets the growing demand for high-performance wireless experiences. Wi-Fi 6E can access up to seven new 160 MHz-wide channels to deliver next-generation wireless connectivity solutions that provide faster speeds, higher capacity and lower latency. Because the new 6 GHz unlicensed band is adjacent to the existing 5 GHz Wi-Fi spectrum, vendors can add 6 GHz capability to their devices at minimal cost to enable new capabilities and applications. Wi-Fi 6E uses greenfield 6 GHz spectrum to enable high-quality Wi-Fi networks by avoiding the congestion present in current unlicensed bands. A study commissioned by WifiForward found that the economic value of unlicensed Wi-Fi spectrum has increased 129 percent since 2013, bringing an estimated $525 billion in economic surplus value in 2017, and is expected to add $833 billion by 2020. The study also projected that if no additional unlicensed spectrum is made available to address new user needs, that value would diminish, and there would be negative effects on the efficacy of the services we all depend on every day, such as Wi-Fi. Hence, regulators in the U.S. and the EU are leaning toward making more unlicensed spectrum available in 2020, and Wi-Fi 6E is uniquely</description>
      </item>
      <item>
         <title>The Importance of Customer Empathy</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-importance-of-customer-empathy</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-importance-of-customer-empathy</guid>
         <pubDate>January 15, 2020</pubDate>
         <description>Knowing and understanding customers and their needs is the key to providing them with quality service. Several years ago, when we started our journey to improve customer loyalty and satisfaction, we met with many of our customers and asked what they wanted from us. How could we better serve them? Customers told us that they wanted us to go beyond solving questions and problems and become their trusted partner, to know them and their environment, and understand how our products affect their business. We took this input and began to reshape the way we provide technical support to our customers. It started out with a simple thought: “Know thy customer.” But how can we do this beyond the transaction in the form of a case or ticket? How can we get to know a customer well if he or she only contacts us five or six times a year? It then struck us – it's all about empathy. Empathy is defined as the ability to understand and share the feelings of another. We began developing key behaviors associated with empathy and trained our engineers in demonstrating these behaviors. They are: · Maintaining engagement throughout the lifecycle of the case — ensuring that cases in the Support organization receive frequent updates and that we advocate for cases in Sustaining Engineering (L2). This way, the customer never has to wait too long for an update and/or resolution. We also set targets on not only fast initial responses (SLOs) but also on resolution times. · Researching and confirming the environment through reviewing engagement history and identifying the history to the customer — This involves reviewing the last several cases from the customer site to ensure understanding of questions or problems being asked or experienced. · Communicating as a trusted advisor and partner committed</description>
      </item>
      <item>
         <title>Digital Product Management: Map your business outcomes</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-map-your-business-outcomes</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-map-your-business-outcomes</guid>
         <pubDate>January 16, 2020</pubDate>
         <description>In the last blog entry, we looked at the first step to successful Digital Product Management (DPM): organizing your investments and products in a way that makes sense for your business. Once you have that foundation in place, you need to begin building an integrated picture – to show how each of those products is delivering significant value and is aligned with the company strategy, marching together towards achieving the long-term vision. This is the process of defining the strategic roadmap for your organization and for each of the products within it. With management focused on digital products, this roadmapping exercise becomes more critical than ever. Each product needs to demonstrate its own product strategy, its contribution to the business strategy and its ongoing customer impact. Product managers must demonstrate a clear path that delivers on their strategy, and that supports their customers. But they must also show a clear path of contribution to the stakeholders’ strategy – to justify the investments being made. This is a highly fluid environment. The relationship between business and product strategy is constantly evolving, customer needs are shifting, and the relative importance of digital products changes as a result of a myriad of factors. In this environment, you need to be able to map and remap your strategy easily, quickly and collaboratively. That’s where Clarity excels and so many other solutions fail. We allow you to update product strategies with drag-and-drop speed, using only the data you need. This might be managed with sticky notes today. But unlike sticky notes, we connect plans with teams, tasks and budgets, giving you a trusted 360-degree view of all the investments in play. With clear roadmaps, finance can develop a high-level capital plan. Senior executives are able to validate alignment with business strategy. And unit managers can</description>
      </item>
      <item>
         <title>Ways to Improve Your Company’s Culture to Ensure Agile Success</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/ways-to-improve-your-company-s-culture-to-ensure-agile-success</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/ways-to-improve-your-company-s-culture-to-ensure-agile-success</guid>
         <pubDate>December 16, 2019</pubDate>
<description>The pace of society has sped up. Social media feeds provide us with a consistent supply of content and news. Amazon delivers packages to us within a day, and we expect companies to quickly adapt to our rapidly changing needs and requirements. This new pace of change is not inherent in large organizations. Many companies rely on the waterfall processes that they’ve had in place for decades. While those processes may have made them successful back then, relying on outdated approaches can severely hinder them from staying competitive in the evolving market. As a result, an increasing number of organizations are adopting agile development practices and frameworks to better respond to market changes in a reliable, scalable way. However, adopting agile doesn’t happen overnight—it requires a sound organizational culture that is ready for such a change. The Rally team surveyed a group of agile business leaders across various industries. We wanted to get their perspective on how work culture plays a role in successful agile implementation. We asked them to provide specific elements of culture that large organizations should pay close attention to, in order to achieve agile success. What Do We Mean By Culture? Culture, in itself, is a vague term, and everyone has different interpretations of what culture means in a workplace. The Managing Partner of a consulting agency said that culture should incorporate value-based thinking. Not in the sense that we should adhere to the values that come from the top, but more in the sense that we should value the thoughts and opinions of people within the organization. However, this can only be done if the organization has a strong culture of inclusion—with people, between teams, and across different departments and job functions. The importance of inclusivity and value-based thinking should be clearly articulated and reinforced during</description>
      </item>
      <item>
         <title>Software Capitalization and Agile - Implementation</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/software-capitalization-and-agile-implementation</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/software-capitalization-and-agile-implementation</guid>
         <pubDate>January 20, 2020</pubDate>
         <description>In the first two parts of this series, we talked about the problem that exists with software capitalization and agile software development methods, and how that problem can be solved. Now, let’s talk about how the solution is implemented. As we covered previously, once you’ve established clear bright lines between the Preliminary, Development, and Post Implementation phases, you can start to track points — either Feature or Story points — to determine how much effort can be capitalized. But how exactly does that work? It’s important to remember that as your organization decides how it is going to implement points tracking for software capitalization, the corporate finance and auditing teams should be involved in every step. It should always be a collaborative endeavor between IT and these groups; customers who have followed this approach find that it makes things much easier. There are several decisions and actions that need to be taken to successfully implement agile software capitalization. Those decisions are discussed in detail in the following paragraphs. Determine the level at which to track points The first decision to be made is at what level points are to be tracked — Feature or User Story. Generally, it is easier for most organizations to start by tracking Story points, because Stories are granular enough that marking the type of work being performed is usually straightforward. Remember, it’s the nature of costs, not their timing, that’s key. It’s the type of activity that matters and that is what you need to track. When making this decision, keep in mind that you will need a consistent, defensible method of estimating points for all work items of the type you select. Many organizations have teams that do not estimate points, so that practice may need to be adjusted. Almost always, the benefits (no</description>
      </item>
      <item>
         <title>Software Capitalization and Agile - The Solution</title>
         <link>https://www.broadcom.com/rally/software-capitalization-and-agile-the-solution</link>
         <guid>https://www.broadcom.com/rally/software-capitalization-and-agile-the-solution</guid>
         <pubDate>January 9, 2020</pubDate>
         <description>In the first part of this series, we talked about the problem that exists with software capitalization and agile software development methods. Now, let’s talk about the solution. The problem, stated simply, is that accounting methods for software capitalization align very closely to waterfall software development methodologies. When the accounting rules were developed, waterfall was the method du jour, so it made sense for the new rules to align with those models. But as agile methods have become more popular, development managers and finance teams have struggled to translate those old rules to the new, agile ways of working. One point that is worth noting is that the finance team is not the enemy here. Finance simply needs a way to determine whether software development expenditures should be expensed or capitalized and meet the generally accepted accounting principles (GAAP) of objectivity, materiality, consistency, and conservatism. It is critical that IT managers work closely with their finance partners as they seek solutions that will work well for agile development. In our experience at Rally Software, we've found that partnering early and having an honest, open dialogue about proposed changes leads to a positive and productive outcome. Now, back to the accounting rules around software capitalization. The key cause for confusion is that development phases of the waterfall methodology seem to map exactly to the guidelines. But a closer look shows this is not really a correct interpretation of the guidelines. Rather than six waterfall phases, there are actually only three that the rules care about — Preliminary (feasibility), Development, and Post-Implementation. Almost all activities in the Preliminary and Post-Implementation phases are expensed. Most (but not all) activities in the Development phase can be capitalized. Shifting focus to these three phases greatly simplifies our thinking around capitalization. 
There are two key gates</description>
      </item>
      <item>
         <title>Broadcom’s NIR SiPM technology sets new performance standards for LiDAR</title>
         <link>https://www.broadcom.com/blog/broadcoms-nir-sipm-technology-sets-performance-standards-for-lidar</link>
         <guid>https://www.broadcom.com/blog/broadcoms-nir-sipm-technology-sets-performance-standards-for-lidar</guid>
         <pubDate>February 3, 2020</pubDate>
<description>LiDAR systems will be essential in the future in order to support SAE level 3 and above in automotive applications. Together with radar and CMOS image sensors, they will enable advanced and autonomous driving. Some of the challenges for future LiDAR systems are large fields of view in combination with high resolution and reliable target detection for ranges up to 250m and beyond. This means even small objects with low reflectivity have to be identified at far distances, even in bright sunlight. One of the keys to meeting this requirement is a high Photon Detection Efficiency (PDE) combined with a high dynamic range. Since automotive applications demand very competitive prices, Broadcom has chosen silicon detectors, which are ideal for the 905 nm wavelength. As a consequence, the Broadcom Industrial Fiber Products Division (IFPD) has broadened its portfolio of innovative optical sensors to include new silicon photomultiplier (SiPM) devices for automotive and industrial LiDAR applications. Broadcom’s latest near infrared (NIR) SiPM solutions address various challenges, such as range limitations and multi-target resolution. The underlying NIR SiPM technology delivers unprecedented performance by combining a high photon detection efficiency (PDE) of 18 percent at 905 nm with a recharge time of 10 ns. High dynamic range is achieved with the smallest cell size, while a low dark count rate (DCR), low crosstalk, and low after-pulsing probability make Broadcom’s NIR SiPM an ideal detector for high-performance LiDAR applications. NIR SiPM highlights:
	PDE at 905 nm: 18%
	Recharge time constant: &lt; 10 ns
	Single photon time resolution: 500 ps
	Smallest cell size
	DCR: 600 kHz/mm2
	Direct crosstalk: &lt; 20%
	Samples available
“Broadcom IFPD successfully released NUV-HD products to the market two years ago. Now that our key SiPM milestones have been reached, we are excited to announce our new cutting-edge NIR solutions for LiDAR applications.</description>
      </item>
      <item>
         <title>Is There a Need for Manual Testers in Our Agile Teams?</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/is-there-a-need-for-manual-testers-in-our-agile-teams</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/is-there-a-need-for-manual-testers-in-our-agile-teams</guid>
         <pubDate>February 3, 2020</pubDate>
<description>This is a scene we have seen in many Agile projects, more than just a few isolated incidents. A development team decides to adopt some sort of Agile approach and they start by reading articles and books. They start implementing what they have read, defining User Stories, setting up Stand Up meetings, organizing their work on boards that implement some sort of WIP limit, etc. Companies and teams decide to adopt the parts or approaches that sound reasonable, while leaving out some of the things that seem “crazy” or counterproductive. After one or two sprints, the teams come to realize that up to now they have only changed what the developers were doing, and the testers are trying to catch up, because they are working more or less in the same way they had before. Testing is still done near the end of the release, in a time-boxed event, and it’s creating a bottleneck in the Agile process. The solution is obvious. Let’s automate! Automation will be carried out by the Developers, as part of the user stories or even as independent stories on their own. Testers step back and maybe do some light regression testing and UAT at the end of the cycle, so the testing phase is not holding up the process. The organization begins to question: what is the role of traditional testers as we become more agile? Time passes and we start releasing the features we have been working on. That is when we start receiving all the bugs that come back from the field. Important bugs that were not found by our developers in their testing or by the limited automation we have. Some of them are edge cases, many of them come from integrations, or from users not working within the “expected scenarios”. Regardless</description>
      </item>
      <item>
         <title>Digital Product Management: Prioritize work based on business value</title>
         <link>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-prioritize-work-based-on-business-value</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/ppm/digital-product-management-prioritize-work-based-on-business-value</guid>
         <pubDate>February 13, 2020</pubDate>
<description>There’s one aspect of traditional business that most organizations are universally bad at: prioritization. The problem is two-fold. First, prioritization happens independent of any real understanding of how choices align with strategy or contribute to the business’ success. Second, prioritization fails to recognize the colossal amount – sometimes up to 95% – of funding that has already been allocated. In an effective digital product management environment, the prioritization process is fueled by your intuitive, comprehensive roadmaps. Using roadmaps, you can immediately see how investment choices support business and product strategies. As a result, you’ll make better-informed decisions. More significantly, you can review the roadmap items that contain carryover – work from previous periods that isn’t complete and will need funding from the current period. You will also gain insight into the required investments in the compliance, maintenance, and security areas. Now you have a much clearer picture of the size of your available funds and a better understanding of the appropriate ways to invest those funds. Because you have organized your investments in a way that aligns with your business, you have transparency into the impact those investment decisions will have on each product’s ROI – by period, by product category, by investment type, or any other factor that makes sense to you. Clarity from Broadcom gives you this level of visibility without any compromise. You define how you organize investments, configure roadmaps, monitor and manage investment prioritization, and make funding allocation decisions. Leveraging Clarity, you get even more control and insight. You can identify unallocated investments that can potentially be invested in higher priority items. These enhanced features enable our customers to increase innovation spend, decrease non-strategic costs, and make informed customer-centric tradeoffs and decisions. 
By combining this with stage gates, you can make those funding decisions more formal and encourage</description>
      </item>
      <item>
         <title>Moving the Broadcom AIOps Monitoring Platform from a Monolithic Architecture to a Modern Microservice Architecture</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-monitoring-platform-modern-microservice-architecture</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/aiops-monitoring-platform-modern-microservice-architecture</guid>
         <pubDate>February 7, 2020</pubDate>
<description>Monolithic architectures tend to consist of one code base and components like a database, user interface, and server-side application, all contained in one unit and managed in one place. Although this type of structure is easy to manage at first, as companies grow and need to develop and deploy quickly, traditional monolithic architectures are becoming obsolete and modern architectures are becoming essential.

For this reason, companies are starting to embrace modern microservice architectures. A microservice architecture splits the functionality of an application into independent pieces that speak to each other via APIs. Broadcom similarly turned the AIOps monitoring platform into a microservice architecture, breaking down components like metrics, trace processing, and alert management into separate containers. This brings a variety of different benefits to customers, including:


	Ease of Scaling: With monolithic architectures, it is difficult to scale quickly. If you do try to scale, you can only scale all the components as a whole, not independently. In a microservice architecture, components are split into independent modules that can easily be scaled.
	Greater Feature Agility: It is easy to add features to a microservice architecture: since the code is separated into different containers, you just have to deploy the new component. In a monolithic architecture, even a small feature requires redeploying everything.
	Lower Total Cost of Ownership: The faster, cheaper development that comes with a microservice architecture decreases the total cost of ownership.


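To make the benefits above concrete, here is a minimal Python sketch of the core idea: two components that communicate only through an HTTP API, so each can be versioned, deployed, and scaled on its own. The service names and payload fields are illustrative stand-ins, not Broadcom's actual implementation.

```python
# Minimal sketch: a "metrics" service and an "alert management" consumer
# that talk only over HTTP. Service names and payloads are illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Stands in for a 'metrics' microservice exposing a read-only API."""
    def do_GET(self):
        body = json.dumps({"cpu_pct": 42.0, "host": "web-01"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for a free port; run the server in a background thread.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "alert management" component consumes the API; it knows nothing about
# the metrics service's internals, only its HTTP contract.
url = "http://127.0.0.1:%d/metrics" % server.server_port
with urllib.request.urlopen(url) as resp:
    metrics = json.load(resp)

alert = metrics["cpu_pct"] > 90.0  # trivial alerting rule for illustration
print(metrics["host"], "alert:", alert)
server.shutdown()
```

Because the only coupling is the HTTP contract, the metrics service could be redeployed or scaled out behind a load balancer without touching the alerting component.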
To learn more about this architecture, join James Kao, Head of APM Product Management, as he walks through how Broadcom took the AIOps monitoring platform from a monolithic architecture to a modern microservice architecture in this new video. To learn more about our AIOps platform, visit our product page.
</description>
      </item>
      <item>
         <title>Introducing Rally Bot, Our Native GitHub Integration</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/introducing-rally-bot-our-native-github-integration</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/introducing-rally-bot-our-native-github-integration</guid>
         <pubDate>February 19, 2020</pubDate>
<description>This week, we’re excited to announce the launch of Rally Bot, our new GitHub integration. Now generally available, Rally Bot integrates natively where your development teams spend most of their time, providing your organization with visibility into the status of work, while reducing context switching for developers. Here’s everything you need to know about our latest integration. What is Rally Bot? Rally Bot is a native integration that lets you connect Rally user stories or defects to pull requests in GitHub. For more information, check out our 1-minute Rally Bot overview video. Please note that the Rally Bot integration is currently only compatible with GitHub SaaS and GitHub Enterprise Cloud. GitHub Enterprise (on-premises) is not currently supported. How it works Simply open a pull request in GitHub and reference the Rally user story or defect URL in the description. From there, Rally Bot automatically creates a connection in Rally for any linked artifacts. The title, Formatted ID, and description for the user story or defect are added to the pull request. In Rally, the connection is added to the user story or defect, making it easy to see the associated pull request with a direct link to view it in GitHub. What can Rally Bot do for my organization? By connecting stories and defects to pull requests in GitHub, Rally Bot provides you with visibility into the status of development work, and also provides requirement-to-code visibility for tracking and audit purposes. The reverse is also true — developers can now view story or defect information directly in context with their pull requests in GitHub through content references. This eliminates context switching between apps, which typically results in happier developers. How to get started Installation of the integration takes less than 60 seconds, with minimal maintenance required after installation. There</description>
      </item>
      <item>
         <title>How Modern AI and Machine Learning Techniques Can Provide Intelligent Automation to IT Operations Teams </title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-modern-ai-and-machine-learning-techniques-can-provide-intelligent-automation-to-it-operations-teams</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-modern-ai-and-machine-learning-techniques-can-provide-intelligent-automation-to-it-operations-teams</guid>
         <pubDate>February 26, 2020</pubDate>
<description>When we look at the IT operations space, there are three key monitoring domains. There is the application domain, which includes user monitoring and digital experience monitoring. There is the infrastructure domain, which includes monitoring servers, cloud assets, and virtual machines. Lastly, there is the network domain, which covers LANs, WANs, wireless networks, and their software-defined counterparts.

Traditionally speaking, these domains work in silos, which means they are managed by different teams, each having complete authority over which tools are used to monitor these domains, how these tools are configured, and what policies are set. In order to be effective, these teams will also implement some sort of automation, and may individually be able to claim they are highly automated as they quickly identify and remediate issues.

However, in this situation, these silos lack visibility into what other teams are doing, leading to a common effect called Islands of Automation. So although it may seem that teams are individually automating their remediation and fixing problems locally, they might not be getting to the real root cause of the problem.

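One simple way to picture what cross-domain automation adds is correlating alarms by time. The toy Python sketch below (an invented illustration, not Broadcom's algorithm; the alarms and the 60-second window are made up) groups alarms from the application, infrastructure, and network silos into a single incident when they occur close together:

```python
# Toy illustration of cross-domain correlation: alarms from siloed tools
# collapse into one incident when they fall inside a shared time window,
# making a single root cause visible across domains.
def correlate(alarms, window=60):
    """Group (timestamp, domain, message) alarms: a new incident starts
    whenever an alarm is more than `window` seconds after the incident's
    first alarm; otherwise the alarm joins the current incident."""
    incidents = []
    for alarm in sorted(alarms):
        if not incidents or alarm[0] - incidents[-1][0][0] > window:
            incidents.append([alarm])
        else:
            incidents[-1].append(alarm)
    return incidents

alarms = [
    (1000, "network", "packet loss on core switch"),
    (1010, "infrastructure", "VM host unreachable"),
    (1025, "application", "checkout latency SLO breach"),
    (5000, "application", "deploy completed"),
]
incidents = correlate(alarms)
# The first three alarms collapse into one cross-domain incident.
print(len(incidents), [a[1] for a in incidents[0]])
```

Each silo's tool would have remediated its own alarm in isolation; grouped together, the network event stands out as the likely trigger for the other two.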
In this video, Adeesh Fulay, Director of AIOps Product Management in the Enterprise Software Division at Broadcom, walks through how AIOps from Broadcom® fixes this situation by providing automation across all domains. For more information on AIOps, check out our product page.
</description>
      </item>
      <item>
         <title>Broadcom’s AIOps Architecture and How It’s Different From Existing Solutions </title>
         <link>https://www.broadcom.com/broadcoms-aiops-architecture-and-how-its-different-from-existing-solutions</link>
         <guid>https://www.broadcom.com/broadcoms-aiops-architecture-and-how-its-different-from-existing-solutions</guid>
         <pubDate>February 26, 2020</pubDate>
         <description> 

For businesses today, the pressure on IT teams continues to mount. While under this pressure, teams are striving to track and manage service levels, and contending with the increasingly dynamic, hybrid, and distributed nature of their computing environments. To meet these demands, teams can use AIOps from Broadcom® to establish proactive, automated remediation capabilities that fuel superior user experiences, while offering fundamental breakthroughs in scale and efficiency.

Here are the five ways our solution differs from existing tools:


	Business Driven Service Analytics: View health and availability information for your different IT services.
	Full Stack Observability: Monitor your entire stack, from your infrastructure to your app to your network, with one tool.
	Data Ingestion: Our AIOps solution can ingest both structured data, like alarms, metrics, and topology, and unstructured data, like logs and traces.
	Machine Learning and AI: By leveraging machine learning and AI, our tool can use root cause analysis and predictive analytics to identify issues quickly and fix them before they affect your customer experience.
	Automated Remediation: Triage issues proactively with automated workflows, removing delays and errors associated with manual remediation efforts.


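To illustrate the data-ingestion point above, here is a hypothetical Python sketch of normalizing one structured metric sample and one unstructured log line into a shared event shape. The field names and schema are invented for illustration; they are not the solution's actual data model.

```python
# Hypothetical sketch: structured and unstructured inputs normalized into
# one event shape so they can be analyzed together. Schema is invented.
import re

def from_metric(sample):
    """Structured input: a metric sample dict becomes a unified event."""
    return {
        "source": "metric",
        "entity": sample["host"],
        "signal": sample["name"],
        "value": sample["value"],
        "ts": sample["ts"],
    }

# Expected raw log layout: "TIMESTAMP HOST LEVEL MESSAGE..."
LOG_RE = re.compile(r"^(\S+) (\S+) (\w+) (.*)$")

def from_log(line):
    """Unstructured input: parse a raw log line into a unified event,
    or return None if the line does not match the expected layout."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ts, host, level, msg = m.groups()
    return {"source": "log", "entity": host, "signal": level,
            "value": msg, "ts": ts}

events = [
    from_metric({"host": "db-01", "name": "cpu_pct",
                 "value": 97.0, "ts": "2020-02-26T10:00:00Z"}),
    from_log("2020-02-26T10:00:05Z db-01 ERROR connection pool exhausted"),
]

# Once both kinds of data share one shape, correlation is a simple group-by.
by_entity = {}
for e in events:
    by_entity.setdefault(e["entity"], []).append(e)
print(by_entity["db-01"])
```

With a common shape, a high CPU metric and an error log from the same entity land in the same bucket, which is the precondition for the machine-learning analysis described above.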
To learn more, watch this video with Sudip Datta, Head of AIOps and Monitoring for Broadcom’s Enterprise Software Division, where he walks through our key differentiators in detail. To learn more about our AIOps platform, visit our product page.
</description>
      </item>
      <item>
         <title>Terma Analytics is now Automic Automation Intelligence</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/terma-analytics-is-now-automic-automation-intelligence</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/terma-analytics-is-now-automic-automation-intelligence</guid>
         <pubDate>February 28, 2020</pubDate>
<description>It has been three months since Terma Software became part of Broadcom and a lot has been happening. We have changed the product name: it is now called Automic® Automation Intelligence. And instead of a series of individual products, we have packaged everything together to make it easier for customers to get the full benefits of the solution. We presented the longer-term roadmap in the February 13th webinar, where we laid out our plans to expand the underlying platform and discussed how we plan to accelerate our offerings now that we are happily part of the Broadcom team. If you were unable to attend, you are welcome to watch the recording here. We are also delivering the first new release as part of Broadcom, which becomes generally available on February 28th. Not only does it have the new Automic Automation Intelligence branding, but it delivers important new capabilities and enhances many existing features. Here is an overview of what it contains. Application Landscape We all know how critical it is for companies to get ahead of the curve on delivering SLAs. Our solution is already excellent at doing that, but in our discussions with clients, it became apparent that there was room to do even better. SLAs are typically defined on a particular workflow, to deliver that component within a certain timeframe. But IT Operations delivers business services, which typically encompass more than a single workflow. That is where Application Landscape comes in. For example, a retail company will be interested in Replenishment of Stock at Store. This would probably start with knowing when the Store End-Of-Day information has been sent to corporate. Later the replenishment workflows will be running within the ERP, so it is important to track components of that processing. Information is</description>
      </item>
      <item>
         <title>AI in PPM: Planning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/ai-in-ppm-planning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/ai-in-ppm-planning</guid>
         <pubDate>March 5, 2020</pubDate>
<description>AI, it is just two letters. But if you believe the hype, those two letters will soon steal our jobs.

That hasn’t happened and it’s not going to.

But artificial intelligence and machine learning are finally a reality in a number of industry segments. Project Portfolio Management, or PPM, is one of the most exciting areas because of the importance of managing corporate investments effectively and because of the inherent uncertainty around projects. Nowhere is the benefit of AI going to be greater than in planning, and especially when it comes to ensuring the right projects are selected and allocated in the right way.

This is an element of planning that has long caused problems for organizations of all types. It relies too much on subjectivity – business cases based on high-level guesstimates, and approvals driven by personal priorities and pet projects. Well, AI doesn’t play favorites!

When you allow AI to sit on top of the review and selection process you gain:


	Visibility into outlier estimates and forecasts, allowing flawed proposals to be identified
	Insight into the capacity and distribution of work to identify ‘hot spots’ and gaps
	Identification of capability gaps that risk derailing your most important initiatives


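As a rough illustration of the first bullet, the Python sketch below flags outlier estimates using a simple robust statistic (median absolute deviation). It is an invented stand-in for illustration only, not the algorithm Clarity uses, and the proposal data is made up.

```python
# Illustrative sketch: flag project estimates that deviate strongly from
# the portfolio median, using median absolute deviation (MAD). This is a
# generic robust-statistics technique, not Clarity's actual algorithm.
from statistics import median

def outlier_estimates(estimates, threshold=3.5):
    """Return names whose estimate is more than `threshold` MADs
    away from the portfolio median."""
    values = list(estimates.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # guard against MAD = 0
    return sorted(
        name for name, v in estimates.items()
        if abs(v - med) / mad > threshold
    )

proposals = {  # hypothetical effort estimates in person-days
    "CRM upgrade": 120,
    "Mobile app": 140,
    "Data lake": 130,
    "Portal refresh": 125,
    "Pet project X": 900,
}
print(outlier_estimates(proposals))  # the 900-day guesstimate stands out
```

A MAD-based rule is preferable to a mean-based one here because a single wildly inflated business case would otherwise drag the baseline toward itself and mask its own outlier status.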
And most importantly you get an approach that doesn’t lose interest or get bored. AI focuses just as much on the hundredth proposal as it does on the first, and it goes back and validates its analysis every time new information becomes available. That way you not only know you are approving the right projects; you find out as soon as something changes and you need to respond.

If you don’t select the right projects, you’ll never achieve the right outcomes. So why not try supporting your planning process with AI?
</description>
      </item>
      <item>
         <title>Gaining Insights from Your Modern Application Environment</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/gaining-insights-from-your-modern-application-environment</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/gaining-insights-from-your-modern-application-environment</guid>
         <pubDate>March 9, 2020</pubDate>
<description>As organizations start their digital transformation journeys, many have started moving their current services into containerized environments. However, these microservice environments have become highly distributed, dynamic, and complex as organizations start to add more digital services or IoT devices. Therefore, it has become crucial to gather health and performance data from these landscapes. In traditional monitoring, agents reside within the environments themselves and pull server, mainframe, database, and network information at regular intervals. However, as we move into modern architecture environments, this approach really doesn't work. Microservices are built to be modular and very small, which makes it impractical to put an agent inside each container, since the agent would consume the container’s resources. For that reason, developers are now starting to add observability into their services. They’re using libraries and APIs to push out the health and performance information of that particular service. With this agentless approach, observability information, as well as health and performance data from containers, can be aggregated across the entire infrastructure, providing a great view of backend services. In addition to the backend, it has become important to monitor how users are consuming services. When you combine that information with the back-end performance, you get a complete picture of the health of the services you're providing to your customers. As IT teams begin to build out these complex architectures, it has become difficult to manage the volume of data that comes with these environments. This becomes even more difficult if you're using disparate tools to collect the information. This is where AIOps from Broadcom® can come in to collect, aggregate, and analyze the information from the user, application, infrastructure, and network, providing a complete end-to-end view across your entire digital supply chain. 
To learn more about this solution and how it can help</description>
      </item>
      <item>
         <title>The Importance of Expertise in Customer Support </title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/the-importance-of-expertise-in-customer-support</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/the-importance-of-expertise-in-customer-support</guid>
         <pubDate>March 11, 2020</pubDate>
<description>“An investment in knowledge pays the best interest.” – Benjamin Franklin. Over the years, our Customer Support survey results have indicated that the knowledge of our Support Engineers is of extreme importance to our customers. Customers were often quick to point out that they appreciated the knowledge that an engineer provided regarding our product or technology, but were also not shy to identify any weakness with respect to knowledge. Knowledge and expertise are also the keys to fast case turnaround and overall customer satisfaction. That is, the more knowledge our Support Engineers have about the product, environment(s), industry, and your business, the faster they can resolve your cases. We ask that all of our engineers commit to a growth mindset and continually build expertise to stay relevant. We have implemented the following to ensure expertise is maintained within Mainframe Customer Support: Experienced Staff – All Mainframe Customer Support engineers were retained throughout the Broadcom acquisition. We even brought back some Support Engineers who had left mainframe to work on distributed products. But we’ve also expanded our staff to ensure that we are not only poised for the future but also have more team diversity. New hires require a bachelor’s degree in Computer Science and must demonstrate that they have the soft skills (communication, problem-solving, teamwork) to perform well in a technical support role. Once hired, a new Support Engineer will attend an 8-week mainframe boot camp where they receive not only training on z/OS, ISPF, TSO, JCL, Assembler, and our CA Technologies products, but also training in soft skills such as communication and public speaking. To ensure proficiency and team building, the boot camp is chock-full of labs and team projects. As such, even our new hires have quite a bit of expertise the day they join</description>
      </item>
      <item>
         <title>Remote PI Planning with Rally</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/remote-pi-planning-with-rally</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/remote-pi-planning-with-rally</guid>
         <pubDate>March 13, 2020</pubDate>
         <description>Those of us who embrace the Agile Manifesto certainly value individuals and interactions over processes and tools and believe that the most efficient and effective method of conveying information to and within a development team is face-to-face conversation. But what happens when circumstances beyond our control prevent face-to-face conversation and collaboration? We often speak with companies and teams that struggle to collaborate effectively when doing SAFe® Program Increment (PI) Planning or Release Planning. Companies are increasingly geographically distributed, individual team members’ personal commitments raise barriers to travel, and the economics of co-locating an entire Release Train or Program for planning can present challenges. Luckily, good telepresence and collaboration technologies can fill part of the gap, and smart use of Agile management tools like Rally software can make a real difference. Whether you’re conducting Big Room Planning (BRP) in person or virtually, here are tips to help you collaborate more effectively and get the most out of PI/Release Planning. Setting the Stage for Effective Remote Collaboration People Everybody participates! Do whatever it takes to remove barriers to participation and check in regularly to ensure that people are engaged and have what they need to be productive. Working Agreements Make sure that you set expectations about respect for individuals, minimum levels of participation, timing of the event, and breaks. Be considerate of team members’ geographic locations when scheduling and keep everyone on the same break schedule. Use video conferencing and make a “cameras on” working agreement. People will forgive pajamas and bad hair days! Pay attention to non-verbal communication. 
During the Big Room Planning (BRP) event, make sure you have a “respect the virtual box” area where those who are co-located can stand and everyone attending virtually can clearly hear and see them (microphones are a good thing). There are lots of</description>
      </item>
      <item>
         <title>Bug Bashing 101 and 3 Key Benefits</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/bug-bashing-101-and-3-key-benefits</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/bug-bashing-101-and-3-key-benefits</guid>
         <pubDate>March 16, 2020</pubDate>
         <description>At Rally, we pride ourselves on having a fun, inclusive and supportive work culture, which is evident in our monthly birthday celebrations, Halloween chili cookoffs, rotation and developer swaps, and much more! This blog is about a different kind of gathering—Bug Bashing, and while it isn't a party per se, it's still a collaborative opportunity that has been an integral part of the Rally R&amp;D culture for years. Bug Bashing is a procedure in which agile development teams thoroughly search their product in a test environment to seek out bugs or software regressions. The detected bugs then get fixed before code is released to customers. Rally’s Evolution of Bug Bashing Historically, the Rally development team held a recurring 1-hour Bug Bash during every release. We’ve experimented with the lengths of our releases over time, but as an example, when we conducted 6-week releases, we would Bug Bash for 1 hour during that time period. Everyone across the development organization (engineers, user learning, product management, support, QA, scrum masters, user experience) was encouraged to participate in this process, with one common goal in mind: to catch bugs. Since 2014, we have moved to a Continuous Deployment model where we release new code not just once every 2 weeks, but multiple times a day. The biggest difference between then and now is that we only perform Bug Bashes when there are wide-spanning changes to the code. This process allows us to deploy code (that is at lower risk for bugs) more quickly, so that our customers don’t have to wait weeks for code changes, while we focus on Bug Bashing code that is at a greater risk of containing bugs and regressions. If we do happen to find a bug in one of the more minor code changes, we are able</description>
      </item>
      <item>
         <title>Digital Product Management: Empower your people!</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/digital-product-management-empower-your-people</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/digital-product-management-empower-your-people</guid>
         <pubDate>March 16, 2020</pubDate>
         <description>In this blog series, our digital product management steps have focused on strategic levels of the business – organizing the digital products in a way that makes sense for your business, mapping all products into a strategic roadmap, and prioritizing investment decisions effectively to optimize performance. Eventually, that strategy has to be converted into work, and that's where step four – empower – comes in. When you are managing your business as a series of digital products, you need to create an environment where the delivery of work against those products is as effective and efficient as possible, while continuously validating that work aligns with the strategy and priorities. To do that requires not just agile work practices, but agile teams that have the freedom and autonomy to adjust their work to ensure they remain aligned with the needs of their products. That becomes more difficult in a product-driven world because there aren't the natural endpoints that a project offers, where performance can be measured and validated. In an environment of continuous delivery, it is paramount that teams understand what they have to deliver, product managers validate the speed and progress of performance, and finance quantifies business outcomes. By combining Clarity and Rally Software into an industry-leading ValueOps solution for digital business and agile management, Broadcom lets your teams work the way they want, while still providing the measures and OKRs needed to monitor team performance. That includes time tracking, of course, but also story points, velocity, or any other measure of progress that makes sense to your business. Clarity also enables product managers to combine a stream of functional releases into a single version without the need for the artificial bundling of those features, which occurs with a traditional project. That release concept draws a line in the sand</description>
      </item>
      <item>
         <title>Why Workload Automation Intelligence Matters</title>
         <link>https://www.broadcom.com/sw-tech-blogs/automation/why-workload-automation-intelligence-matters</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/automation/why-workload-automation-intelligence-matters</guid>
         <pubDate>March 19, 2020</pubDate>
         <description>In today’s real-time, instant gratification business world it might surprise some that underlying many of the systems and applications that provide this real-time reality are job scheduling and workload automation systems. These systems manage the critical feeds and processing to ensure that real-time applications support the business and its customers with accurate and relevant information. These critical workload processes support everything from financial market transactions to the placement and shipment of customer orders. It’s mind-boggling how complex and important these processes are, yet very few people are aware of the role these systems play until something fails, or doesn’t execute properly or on time. Workload Automation products and solutions come in many flavors from many vendors. Larger enterprises predominantly run the “Big 3” vendors’ products. Surprisingly, they don’t run just one vendor’s solution, but often multiple ones from different vendors as well as various specialty solutions from smaller vendors to meet the specific needs of certain applications. This adds to the complexity of these environments, and the costs associated with managing them as the business process spans multiple systems and therefore multiple scheduling tools. Although these workload solutions provide monitoring capabilities for the jobs that are in flight or scheduled, they do not provide an analytical approach to managing, monitoring and optimizing the workload under their control based on the business processes that are being run. This is where Workload Analytics comes into play. Workload Analytics While virtually every area of your business and most areas of IT are embracing analytics, workload automation has not. Some use historical run data to develop trends, however, this is only an after-the-fact Band-Aid that can tell you that you have already failed or that you might eventually fail somewhere. 
It cannot point out specific future problem areas or provide</description>
      </item>
      <item>
         <title>How Modern Network Monitoring Can Deliver a Healthy Customer Experience for Remote Workers</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/modern-network-monitoring-can-deliver-a-healthy-customer-experience-for-remote-workers</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/modern-network-monitoring-can-deliver-a-healthy-customer-experience-for-remote-workers</guid>
         <pubDate>March 18, 2020</pubDate>
         <description>During this current time of global stress, major companies like Twitter, Amazon, and Google are asking their employees to work remotely and restricting all non-essential business travel as a measure to keep their employees healthy. So what's the impact of all these new remote workers on IT teams and specifically the network? Let’s look at a very real and recent network monitoring use case from one of our customers. A large oil company in Asia recently implemented a policy where a large number of employees now have to work from home. How do you securely gain access to your company assets and resources when at home? A Virtual Private Network (VPN) is the most popular and secure way the industry gives access to company data when traveling or working remotely. Like most network devices that handle traffic, a VPN gateway usually has limits to the number of connections it can handle or is configured to handle at any given time. Therein lies the current problem with our oil company. They were now experiencing a large number of network connection problems with their VPN gateway. A significant number of users were now having issues connecting to the company network or just getting dropped completely. Users in China who were supposed to connect to the China VPN hit the gateway's max number of connections allowed and then tried to connect to their Singapore VPN, which caused that gateway to also hit its maximum number of connections allowed. You can imagine the cascading effect this behavior had and the impact on users in different regions trying to work remotely while adhering to new policies put in place due to current global events. Unfortunately, their operations team had little network monitoring visibility into how these VPN Gateways were performing</description>
      </item>
      <item>
         <title>Robust VCSEL-based optical wireless transceiver for short-range industrial communications </title>
         <link>https://www.broadcom.com/blog/vcsel-based-optical-wireless-transceiver-for-short-range-industrial-communications</link>
         <guid>https://www.broadcom.com/blog/vcsel-based-optical-wireless-transceiver-for-short-range-industrial-communications</guid>
         <pubDate>March 23, 2020</pubDate>
         <description>That our world is moving with increasing speed toward wireless solutions has its roots in consumer applications, such as wireless keyboards, optical mice and PC docking stations. Industrial solutions, although certainly attracted by the wire-free convenience, are more focused on reliability, repeatability, and the overall lifetime and maintenance of the system. Nonetheless, in some industrial applications wireless connectivity can add clear system benefits, while in others there is no way around it. RF-based short-range communication technologies are widely available on the market but are not designed around industrial requirements and end applications. These applications demand robust signal integrity and high system reliability combined with high bandwidth and real-time capabilities. For this reason, Broadcom is introducing a new platform of optical wireless communications, and the first product – AFBR-FS50B00 – is designed to address many of the requirements of industrial system integrators. The focus of this development was to solve many of the issues and technical problems inherent in optical free-space communications. Toward that goal, Broadcom was able to develop a highly integrated, small footprint, single optics transceiver device capable of communicating full-duplex using a single wavelength. In its practical implementation, this means that the device houses both transmitter and receiver under one single radially symmetric lens. This special design allows the customer to maintain the data link integrity in a rotating system where the transceiver devices are aligned with the mechanical axis. As is the case for the lead application, this makes the AFBR-FS50B00 not only an obvious candidate for a rotating system but also for a docking application where a higher degree of freedom is needed in the alignment of the docked unit. 
The use cases, however, range from wire-free high-speed interconnects and rotary feedthrough to diverse docking and through-glass applications. Many applications need to</description>
      </item>
      <item>
         <title>For the Mainframe, The Tide is Turning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/for-the-mainframe-the-tide-is-turning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/for-the-mainframe-the-tide-is-turning</guid>
         <pubDate>March 20, 2020</pubDate>
         <description>If you had asked an analyst just a few years ago what to do with your corporate IT and your IT architecture and infrastructure, nine out of ten would have told you to get rid of your mainframe. That is not the case anymore. So, what happened? First, time has passed and continues to pass, and the mainframe is still there. At some point, even the most rigid mind has to face reality. Doing the same thing again and again while expecting a different outcome will not take you anywhere useful. Second, the reason why the mainframe exists is to perform the transactions that run your business, and those are in essence still the same, and continue to be executed under the same premises. There is no logical reason to rebuild that again somewhere else. Why? Well, COBOL may not sound sexy and cool, but it is efficient at handling business logic and, more importantly, it is already written and tested. It may have taken twenty years to build and refine the engineering work around those mission-critical applications. It is naive to believe that a different team, just because they use a different programming language and execute on different hardware, will build and refine the abstract representation of all the details of your business around your core applications in such a short amount of time. Too many organizations have paid an expensive tax just to learn that. Every action has two costs: the cost of doing it and the opportunity cost of not doing something else. Agility means adapting quickly to new circumstances; stopping to rework what is already done well is sclerotic. A third reason is that the mainframe is a moving target. It is not a passive platform but one in constant evolution. Moreover, mainframe</description>
      </item>
      <item>
         <title>Gaining Insights from Application to Mainframe with DX Dashboards</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/application-to-mainframe-dx-dashboards</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/application-to-mainframe-dx-dashboards</guid>
         <pubDate>March 26, 2020</pubDate>
         <description>This post will help you learn how to gain insights into modern application environments by leveraging the value inside your data lake with DX Dashboards. The main goal of having a common data lake is to eliminate the “fragmented” view of the IT landscape where each domain manager has a view of what is happening within their domain (e.g. APM monitoring tools have no visibility into what is happening in the network domain). So how can AIOps and DX Dashboards help us to mitigate this siloed view? We start by ingesting structured and unstructured data from different sources into our AIOps platform from the monitoring domain tools. This Big Data layer is overlaid with an Analytic and Automation engine that applies AI/ML to produce Insights and trigger self-remediation. Finally, we arrive at the Visualization layer: an open component that enables the user to have full-stack observability and make business-driven decisions: DX Dashboards. DX Dashboards are the “face” of the 3 V’s of Big Data: Volume, Velocity and Variety of data. Let’s take a closer look at how to achieve this by presenting a powerful Dashboard that brings all this together. The following example will showcase how to take the user from Application to Mainframe for a specific Business Service, for instance “Digital Banking”. We start with a high-level overview of the health of this line of the business. The dashboard displays the status of the different service components (e.g. the “Payments Application” or the “Backend Banking” component, which contains the Mainframe entities supporting this Service). The data displayed on this dashboard is calculated by our Analytical Engine from the raw data in our Data Lake. The inputs to the model are metrics, alarms, logs, anomalies and Topology. The outputs of our Service Analytics model are critical KPIs like Health or</description>
      </item>
      <item>
         <title>Five Principles of Virtual Agility</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/five-principles-of-virtual-agility</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/five-principles-of-virtual-agility</guid>
         <pubDate>March 27, 2020</pubDate>
         <description>Agile methods have never been friendly to the idea of distributed teams. Collaboration and face-to-face communication are core principles that date back to the Agile Manifesto, and there are well-respected agile consultants who have included ‘distributed teams’ in their list of agile failure modes. Despite these warnings, it is more common than not for companies to have distributed agile teams. Many of the companies we talk to have headquarters in North America, but development teams scattered around the globe. Even Rally Software — a leader in agile management software (and where we work) — had development teams located in Colorado and North Carolina for years. However, in almost all of these cases, the teams themselves are usually co-located, with all team members sharing an office location. Rarely are individual members of an agile team off on their own. Adapting agile practices Today, however, we find ourselves in a new reality. COVID-19 is having a profound impact on the way we work. Overnight, we’ve gone from advocating that agile teams are most productive when they work face-to-face, to the reality that team members on those teams must work apart. While some companies have not pushed their employees out of the office entirely, many are recommending that they not use common spaces like conference rooms for collaboration. Employees sit at their desks — together but apart. At the extreme end of the scale, many technology companies are sending their people home. This is because work-from-home arrangements are usually feasible in the technology space — employees often have laptops, even if they typically work in an office setting, and corporate infrastructure supports and easily adapts to a work-from-home model. Cautions But that doesn’t mean that it is as simple as grabbing your laptop, going home, and setting up shop on the kitchen table.</description>
      </item>
      <item>
         <title>Merchant silicon enabling network systems disaggregation</title>
         <link>https://www.broadcom.com/blog/merchant-silicon-enabling-network-systems-disaggregation</link>
         <guid>https://www.broadcom.com/blog/merchant-silicon-enabling-network-systems-disaggregation</guid>
         <pubDate>March 31, 2020</pubDate>
         <description>The term “disaggregation” has recently become fashionable and is being popularly applied to many fields, but the precise meaning when applied to networking systems warrants clarification: Network systems disaggregation in essence is enabling the separation (or “disaggregation”) of hardware and software using open interfaces, with simplified hardware based on high-performance merchant silicon. The increased interest in disaggregation is driven by the compelling economics and accelerated innovation as compared to OEM chassis solutions. Different models of disaggregation are shown in Figure 1. Equipment purchased in a traditional OEM model typically consists of proprietary hardware and software, as shown in the far left column. Disaggregation enables end users to purchase simplified hardware from multiple vendors utilizing merchant silicon, and deploy these running either OEM specific or open source software stacks, using open APIs, a Network Operating System (NOS), and associated applications (far right). Disaggregation by definition enables the flexibility to have any hardware work with any software, thereby enabling increased choice and competition. Many models of disaggregation are possible depending on the end customer, market segment, ecosystem readiness, and software maturity. OEM or open source hardware may be paired with open or OEM/proprietary software (middle columns) as well. Broadcom’s partners include those running both kinds of software stacks, per their end requirements. Today major data center operators have led the way in terms of disaggregation, supported by merchant silicon and open source software and interfaces. At this point in time, the majority of operators have adopted a disaggregated model for switches, supported by organizations such as the Open Compute Project (OCP). They have also adopted network operating systems such as SONiC, developed by Microsoft, and FBoss from Facebook, both available as open source. 
These open source software projects are the result of cross-industry collaboration between end users, silicon vendors, and systems integrators, with</description>
      </item>
      <item>
         <title>FCC poised to enable 6-GHz band for Wi-Fi 6E – and we are thrilled</title>
         <link>https://www.broadcom.com/blog/fcc-to-enable-6-ghz-band-for-wifi-6e</link>
         <guid>https://www.broadcom.com/blog/fcc-to-enable-6-ghz-band-for-wifi-6e</guid>
         <pubDate>April 1, 2020</pubDate>
         <description>Today’s announcement by FCC Chairman Ajit Pai that the Commission will vote to open 1,200 MHz of unlicensed spectrum in the 6 GHz band positions the U.S. to lead the world in next-generation 5G services. All Americans could soon have Wi-Fi on a pristine wireless superhighway that delivers digitally immersive experiences, including education and telemedicine.

Broadcom Inc. is thrilled at the prospect of enabling the latest Wi-Fi 6E standard in the 6 GHz band this year. In the past few months, we announced a full ecosystem of Wi-Fi 6E devices for routers and smartphones, while also demonstrating the real-life speed and latency benefits of this new band. More importantly, we are actively working with partners to bring over 2 Gb/s of wireless data speeds to your palms and your homes soon. 

We commend Chairman Ajit Pai and his colleagues for their elegant vision for expanding the use of this critical band to include unlicensed technologies, and are excited for the opportunities this presents to shape the next 20 years of our connected world.

LEARN MORE

Read more about Wi-Fi 6E in the 6 GHz band.
</description>
      </item>
      <item>
         <title>Getting Started with Virtual Agility</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/getting-started-with-virtual-agility</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/getting-started-with-virtual-agility</guid>
         <pubDate>April 1, 2020</pubDate>
         <description>In our previous post, we presented the idea of Virtual Agility as a means to maintain agile ways of working amidst a global health crisis. So now that we know why it’s important to embrace Virtual Agility, the next question is how do we implement it? Acknowledge the Change For organizations practicing traditional Agility, the idea of Virtual Agility is a completely new approach to how they’ll plan, execute, measure and deliver value. So let’s take a look at some challenges that organizations encounter along the way, and steps taken to mitigate and overcome those challenges. Initially, moving to a virtual model is a psychological challenge for team members. Not everyone is “wired” for remote work, and for those people, remote work can result in feelings of isolation, loneliness, lack of motivation, and a disconnect from co-workers. And so, in addition to the challenges revolving around the work, we need to be especially sensitive to the challenges individuals may be experiencing and address those as well. Organizational leaders also struggle with how their role (and the roles they manage) might change in terms of managing virtual teams. Concerns around the perceived negative impact on productivity if teams are not co-located are often raised: Will teams be able to effectively collaborate and connect in a virtual environment? Can motivation and productivity be sustained after such a transition? Will teams continue to have the tools necessary to effectively deliver customer value? Can Agile teams still perform all the necessary Agile ceremonies and Big Room Planning (BRP) sessions in a virtual environment? The good news is that the answers to these questions are YES. Prior to the current situation we find ourselves in, companies had successfully conducted business as usual in a virtual environment. The reasons for going virtual were different — perhaps</description>
      </item>
      <item>
         <title>AI in PPM: Spotting Trends</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/ai-in-ppm-spotting-trends</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/ai-in-ppm-spotting-trends</guid>
         <pubDate>April 1, 2020</pubDate>
         <description>There’s not a lot that goes right with planning, is there? I mean, that’s how it’s supposed to work – it’s a plan so it’s going to be wrong, but still, it’s depressing that every time we plan portfolios, projects, resource allocations, heck even when we plan planning sessions, something comes along and messes up those plans before the virtual ink is dry.

What we need is someone who can constantly look at what’s happening and identify trends early enough that we can make adjustments before the plans become disrupted. But until now, we’ve never had a crystal ball in PPM. Today we do, in the shape of AI.

AI’s not some magic tool that can foresee the future, but if we feed AI-enabled PPM solutions enough of the right information they will identify the trends that are eventually going to disrupt our plans far sooner than we humans ever could. And that’s invaluable, not only because it provides more insight earlier, but because it frees the expensive human decision makers to focus their efforts on making the right decisions.

No longer are highly skilled professionals analyzing project reports and spreadsheets; instead, they are reviewing AI-powered reports that provide true insight into what is happening in the business. They are deciding whether to follow the recommendations of the AI tools or take a different approach. And that’s the key: AI isn’t making our decisions for us. AI is making it easier for us to make our decisions.

And when you can make better decisions, in less time, the performance of the overall portfolio inevitably improves – translating directly into better business outcomes, which after all is the whole point of investing millions of dollars into the enterprise portfolio. With AI, that return can reach a whole new level.
</description>
      </item>
      <item>
         <title>Next-generation technology poised to hit later this year as FCC Chairman Pai moves to unlock the 6 GHz band for Wi-Fi</title>
         <link>https://www.broadcom.com/blog/pai-moves-to-unlock-the-6-ghz-band-for-wi-fi</link>
         <guid>https://www.broadcom.com/blog/pai-moves-to-unlock-the-6-ghz-band-for-wi-fi</guid>
         <pubDate>April 6, 2020</pubDate>
         <description>In a historic move, Federal Communications Commission (FCC) Chairman Ajit Pai displayed unprecedented leadership in wireless connectivity by proposing to free 1,200 MHz of spectrum in the 6 GHz band for unlicensed use—the first new spectrum allocation for Wi-Fi since 2003. If approved later this month, this perfectly balanced proposal is poised to unlock technological disruption for the next 25 years that will help Americans in every facet of their lives. This week’s announcement is a manifestation of many years of hard work and industry momentum to reach an elegant solution for unlocking this wireless superhighway. As the Chairman puts it, the decision was driven by physics. His proposal is set to deliver high-performance wireless experiences for our consumers and push America forward in rapid technological innovation—at homes, schools, hospitals, offices and stadiums. At homes and offices, the combination of Wi-Fi 6 and 6 GHz (touted as Wi-Fi 6E) will deliver stable, superfast Wi-Fi. For sports fans like Chairman Pai, the 6 GHz band is poised to power AR/VR technologies that will drastically change game-spectatorship for the better, and revamp the in-stadium experience entirely. At Broadcom, we pride ourselves on connecting everything. And Wi-Fi has been a key cog in that vision. As with prior generations of Wi-Fi, we brought Wi-Fi 6 first to the market in 2018 and quickly enabled over 150 million devices last year. Earlier this year, we announced a portfolio of chips built to run in the 6 GHz band— eight Wi-Fi 6E access point solutions and the world’s first Wi-Fi 6E client device. With a test license from the FCC last month, Broadcom partnered with Intel to show these benefits in real time, demonstrating high speeds of ~2 Gbps and latency of less than 2 milliseconds for Chairman Pai, FCC Commissioners and many others on</description>
      </item>
      <item>
         <title>Digital Product Management 101 – Planning in uncertain times</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/digital-product-management-101-planning-in-uncertain-times</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/digital-product-management-101-planning-in-uncertain-times</guid>
         <pubDate>April 9, 2020</pubDate>
         <description>Nowhere is the impact of digital product management, or DPM, felt more than when it comes to planning. The very foundation of planning for most organizations is the selection of the projects, initiatives, and investments that will drive the most business value. With DPM, the focus shifts away from projects to the management of digital assets for their entire duration. And as that focus shifts to a longer timeline, so too must planning adjust. Organizations must put greater focus on the concept of roadmaps. These represent the directional strategy for each digital product and form the basis of all planning for that product. Roadmaps are developed to set the business’ vision and strategy. They are tools to help communicate with customers and convey directional growth to development teams. Roadmaps morph and evolve over time, as the product manager refines them in response to new opportunities and shifting customer demands. This ability to quickly and easily adjust roadmaps is key to planning success. Roadmaps remain high level and directional until work is ready to begin. Only then will more detailed planning be undertaken, allowing a digital product to pivot with minimal resistance right up until the point where work is set to begin on any roadmap element. This integration of roadmaps with every element of the organization – an agile connection to the work, a transparent connection to the strategy, and an ability to communicate these connections to internal and external stakeholders – is what makes roadmaps so powerful. Especially in these uncertain times when our market and customer demands are shifting daily, roadmaps are a critical element of DPM and enable a business to realize the full potential of digital transformation and a digital product approach. Now more than ever companies need to have full transparency into their planning so they can</description>
      </item>
      <item>
         <title>Virtual Agility Principle #1</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-1</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-1</guid>
         <pubDate>April 10, 2020</pubDate>
         <description>In our previous post, we covered how to get started with implementing Virtual Agility. This blog will focus on the first key principle of Virtual Agility in greater detail. As organizations move to a new way of working that isolates each worker, everyone involved will need to adjust. Some will adjust better and more quickly than others. With any change like this, there will always be those who feel they have something to lose, and those who feel they have something to gain. Regardless of whether it’s real or imagined, these feelings have an impact. Trust is particularly important when it comes to acknowledging the change. Actions, words, and behaviors that imply things like “I’m not sure that this will work,” or even worse, “I’m not sure you will be able to do this,” acknowledge the change, but in very unhealthy and distrustful ways. For example, don’t start to “virtually hover” over people by checking in with them excessively for status updates, or audit time charging applications or chat tools that often display a real-time notification that someone is at their computer. Three key stages of transition In her work on The Twelve Failure Modes of Agile Transformation, Jean Tabaka talked about three key stages of transition. These stages are relevant to this discussion, because the move to Virtual Agility is a true transformation. The three stages are: Endings: People can find themselves disoriented and disenchanted. Guide them to let go of what they’ve believed about themselves and how they see themselves in their work environment. The neutral zone: To move forward, accept the reality of what was and what may be. It’s like standing in the middle of the street. You can’t stay there forever, but you know you have to be there to get to the other side.</description>
      </item>
      <item>
         <title>The Importance of Understanding Change and Service Impact as Part of Deploying Code to Production</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/change-and-service-impact-deploying-code-to-production</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/change-and-service-impact-deploying-code-to-production</guid>
         <pubDate>April 15, 2020</pubDate>
         <description>The role of SREs (site reliability engineers) has been changing drastically over the past decade. From being firefighters putting out fires, SREs are now looking to go right to the root cause of issues, and tackle them from step one all the way to deployment. Further, modern approaches like AIOps are improving service levels in ways previously not thought possible. Let’s discuss the key trends that are impacting how code is deployed to production, and how SREs can use AIOps to improve the entire process. Start with Code Creation The software development life cycle has been shifting left. AIOps follows this trend and puts a tremendous focus on the initial parts of the process. Dev teams have become collaborative. As code is written by numerous developers all working in a distributed manner, their code contributions are managed using modern code repository solutions like Git. Once written, code is committed from a local machine, and automated tests are run on the code. These build scans, or dry runs as they’re sometimes called, are preliminary checks for code quality. Following the shift-left movement, SREs now have good reason to encourage such early automated checks. The best time to spot a bug is during development. Before, this wasn’t possible, but thanks to repository-based development, there is increased visibility and collaboration right from step one. Enforce Quality Control Alongside QA Teams Once code passes the dry run, the build process is initiated by a CI server like Jenkins. This CI process also includes automated testing. Here, unit tests and integration tests are run to see how the code interacts with existing services. This is a crucial step, not just for QA but also for SREs. While QA owns the creation and execution of test scripts, SRE is a key stakeholder</description>
      </item>
      <item>
         <title>New Analyst Research Reveals Improved Visibility Needed From Traditional Network Monitoring Software</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/new-analyst-research-reveals-improved-visibility-needed-from-traditional-network-monitoring-software</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/new-analyst-research-reveals-improved-visibility-needed-from-traditional-network-monitoring-software</guid>
         <pubDate>April 13, 2020</pubDate>
         <description>Today’s data deluge, resulting from the changing landscape of network technologies, demands an innovative approach to networking and network monitoring. Organizations need visibility across current, modern, and edge communication paths to ensure the delivery of a reliable customer experience. Cloud Migration When workloads move to the cloud, network monitoring software visibility is lost. Moving applications to hybrid infrastructure creates visibility gaps for network operations teams, as they may no longer have the operational insight needed to be effective. Yet it is critical that applications continue to deliver high levels of responsiveness and availability – at all times, no matter if the application is deployed in the data center, private cloud, public cloud, or a combination of all three. Edge Networking The recent exponential growth of SD-WAN, IoT, and faster networking technologies such as 5G – which support real-time applications like video processing, analytics, or even self-driving cars – means that new technologies now produce more traffic at the edge of data center networks. Traditionally, data computation is done at the core. But, to avoid latency issues that will affect application performance, IoT and 5G require processing and storage closer to the edge of the network, where this data is being gathered. Additionally, research has shown that by 2025, “5G networks will carry nearly half of the world’s mobile data traffic.”1 This tsunami of data, as well as new application latency requirements, will demand a new assurance approach from network monitoring software vendors. Cloud-Native Architectures As the use of containers and microservices continues to grow, network monitoring software must adapt to the change in visibility and flow of network traffic. Monitoring container-to-container traffic flows via APIs is a good starting point and will help close the visibility gaps created by these new architectures. 
Recommendations Leading analysts recommend a future-proof monitoring strategy by</description>
      </item>
      <item>
         <title>Domain-specific switch silicon in networking</title>
         <link>https://www.broadcom.com/blog/domain-specific-switch-silicon-in-networking</link>
         <guid>https://www.broadcom.com/blog/domain-specific-switch-silicon-in-networking</guid>
         <pubDate>April 13, 2020</pubDate>
         <description>Hennessy and Patterson argue in their article “A New Golden Age for Computer Architecture” that domain-specific architectures, as opposed to general-purpose computing, are a viable path to improving performance and efficiency, as they match applications to processor architecture. A similar dynamic exists in networking infrastructure, where we see three distinct market segments – Hyper-Scale Data Center, Service Provider, and Enterprise – with very different requirements. In designing switches and routers for network infrastructure, there are several dimensions to consider when optimizing for each segment. The inability to optimize for all metrics simultaneously is not a matter of engineering capability, but rather a restriction imposed by silicon technology and economics. For example, optimizing for low latency requires decentralized databases rather than centralized databases, because accessing a centralized database would imply longer wires from different stages and therefore longer latency. Figure 1: Key switch design dimensions For Hyper-Scale Data Center network infrastructure, high bandwidth, switch radix, and latency are the most critical metrics, since application bandwidth demands are rising much faster than in other networking segments. Broadcom’s Tomahawk® family of switches is optimized to address the bandwidth challenges of the Hyper-Scale Data Center. Broadcom recently announced Tomahawk 4, the world’s first 25.6Tb/s switch for compute, storage, and AI cluster connectivity. In the Service Provider segment, the infrastructure is highly shared between various enterprises and consumers utilizing different applications over long-distance transport. The Service Provider segment typically requires larger packet buffers, expandable tables, and rich feature sets for edge and Internet peering applications. Broadcom’s DNX line of processors is specifically optimized to address this segment. 
Broadcom’s Jericho 2, the industry’s leading packet processor for Service Provider applications, is now in full production. For chassis applications, Broadcom provides a Jericho 2 packet processor and an optimized fabric (Ramon), which typically results in overall power</description>
      </item>
      <item>
         <title>How AIOps Helps an SRE with Their Daily Work</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-helps-an-sre-with-their-daily-work</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/how-aiops-helps-an-sre-with-their-daily-work</guid>
         <pubDate>April 27, 2020</pubDate>
         <description>Being a Site Reliability Engineer (SRE) is not an easy job. You have to manage code deployment, configuration, monitoring, and more, so that everything works in production without any problems. Triage, troubleshooting, remediation, and support are, for the most part, done manually. No matter how good you are, these processes are error-prone and require a lot of effort. Automating them is the goal of the new tooling movement around AIOps. What is AIOps? AIOps stands for Artificial Intelligence for IT Operations. It makes use of advanced machine learning algorithms and AI techniques to analyze Big Data from various IT and business operations tools, speeding up service delivery, increasing IT efficiency, and delivering a superior user experience. AIOps breaks away from siloed operations management. AIOps is essentially applying machine learning algorithms to the vast amounts of data available in order to provide insights and make a higher level of automation possible. IT Ops no longer needs to depend largely on human operators for the modern software development life cycle (SDLC). Solutions powered by AIOps retrieve their intelligence from a variety of resources and give analytics platforms access to this stored data. Simply said, AIOps delivers automatic diagnostics and metric-driven continuous improvement for the development (dev) and operations (ops) teams across the entire SDLC. What are the main ways AIOps helps SREs? Correlate and Analyze Disparate Datasets One of the techniques used in AIOps is Topology Analytics. Using this technique, your SRE team can consume and correlate intelligence from multiple architectural layers. This way, the root cause of an issue can be identified and then automatically and effectively remediated. This is much faster and more efficient than manually tracking symptoms and fixing them. 
Holistic Visibility of Your Digital Delivery Chain By using AIOps, you can visualize two important parts</description>
      </item>
      <item>
         <title>Preparing for Day One in New Digital Business World</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/preparing-for-day-one-in-new-digital-business-world</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/preparing-for-day-one-in-new-digital-business-world</guid>
         <pubDate>April 16, 2020</pubDate>
         <description>It’s inspiring to watch businesses coming to grips with the impact of COVID-19. In every industry, the more forward-thinking companies have stopped reacting to the current situation and instead are planning for a future after COVID-19. And they aren’t looking at it as a return to the old normal, but as an opportunity to create something new and exciting. Those are words you’ve probably not heard much lately – opportunity, exciting – but they are part of your future. Make no mistake, there will be significant challenges ahead, and if you are currently in crisis response mode, struggling with difficult furlough decisions, then exciting opportunities may seem like a distant fantasy. We need to prepare, however, because the future is closer than we think. Because COVID-19 has disrupted every aspect of life, we need to reassess the fundamentals of our business. There is a need to press the restart button, to create a Day One of the post-COVID-19 world. Those of you who’ve been through a corporate merger will know what I mean: The official first day at the combined company brings fresh opportunities for exciting innovation and the chance to take on new challenges. But it also triggers a great amount of fear of the unknown. That’s the future for every business. Recovery isn’t about resumption; it’s not about restarting what was stopped. It is about embracing new opportunities, prioritizing investments, embracing innovation, and delivering rapid success. In this blog series we’re going to explore some of those elements, including: Adaptive strategy management Embracing new and changed business models to optimize digital relationships The shift to an innovation-driven, digitally-enabled operating model We know you’re busy dealing with the fallout of this pandemic right now. But we also know that as leaders you’re starting to turn your thoughts to the</description>
      </item>
      <item>
         <title>Virtual Agility Principle #2</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-2</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-2</guid>
         <pubDate>April 16, 2020</pubDate>
         <description>In our previous post, we covered the first principle of virtual agility — Acknowledge the Change. This blog will focus on the second key principle of Virtual Agility in greater detail. This principle is the most obvious one, or so it seems, because in agile and lean environments, collaboration is critical. Agile teams work together to develop solutions and solve problems, and that requires near-constant communication and sharing of information — collaboration that is seamless, easy, and always available. We aren’t talking about tools people need to actually do their jobs. For example, software developers who must produce working software need IDEs and the whole suite of tools that fall into the DevOps space. Finance workers need accounting software, salespeople need CRM tools, and so forth. Most organizations are very good at providing those to their workers, and making them accessible through secure means such as corporate VPNs. We are talking specifically about the virtual support that allows these groups to function as teams with the same effectiveness virtually as they do in the corporate office. Virtual support for the enterprise Many tools that effectively support collaborative ways of working are already available in corporate environments, but are often not used by the entire workforce. For example, I use Cisco’s WebEx product for remote meetings. I am very familiar with WebEx because I use it all the time, but it has many advanced features, and I find that I’m always learning something new. I’ve noticed that some of our employees who are used to working in our corporate office aren’t as familiar with some of the usage basics — how to start/join meetings, options for connecting audio and video, muting/unmuting. Sometimes a quick 10-minute overview for the general team can go a long way toward helping people past these impediments</description>
      </item>
      <item>
         <title>Why It’s More Important Than Ever to Have Reliable Network Monitoring Software to Keep a Healthy Connection Between the Enterprise and Service Providers</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/why-it-s-more-important-than-ever-to-have-reliable-network-monitoring-software-to-keep-a-healthy-connection-between-the-enterprise-and-service-providers</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/why-it-s-more-important-than-ever-to-have-reliable-network-monitoring-software-to-keep-a-healthy-connection-between-the-enterprise-and-service-providers</guid>
         <pubDate>April 17, 2020</pubDate>
         <description>Our New Reality and the Impact on Networks and Network Operations Today, network operations teams face many challenges brought on by the global crisis. Making sure their networks can service the additional volume is certainly a high priority, and making sure their network monitoring software can aid them in this effort is a critical mandate. The impact on IT operations can be illustrated in a U.S. telecommunications provider’s recent report on the increase in network traffic due to our new reality1: 60% increase in peak traffic 65% increase in digital voice 212% increase in conferencing 38% increase in streaming video 25% increase in video on demand 24% increase in WiFi It may be more important today than ever to understand the impact of network performance issues on the customer experience. Many people cannot submit unemployment claims online and are extremely frustrated. Organizations cannot submit emergency funding requests to financial institutions and get the stimulus money they need to survive. Healthcare organizations cannot process the results of patient tests in time to save lives. To address the situation, organizations are incurring additional unforeseen costs to boost the headcount at their call centers and to perform manually what should be automated tasks – frustrating users and drawing unwanted press coverage in the process. Additionally, many state agencies, financial institutions, and healthcare organizations find their MPLS circuits cannot handle the sudden surge in network volume. Swarmed with customers checking for their $1,200 in stimulus money from the federal government, many online banking firms reported that their services were temporarily unavailable on Wednesday, April 15th. Down or interrupted services were reported for SunTrust, BB&amp;T, U.S. Bank, JPMorgan Chase and Citi, among others.2 While it may be too late for some organizations to react, those using DX NetOps network monitoring software from Broadcom are better</description>
      </item>
      <item>
         <title>New Analyst Research Suggests Modern Network Monitoring Tools Should Offer Business-Level Analytics and Automated Workflows</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/new-analyst-research-suggests-modern-network-monitoring-tools-should-offer-business-level-analytics-and-automated-workflows</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/new-analyst-research-suggests-modern-network-monitoring-tools-should-offer-business-level-analytics-and-automated-workflows</guid>
         <pubDate>April 17, 2020</pubDate>
         <description>The lack of alignment between the business and IT is one of the main reasons why digital transformation initiatives do not realize their full potential. In particular, network operations teams need advanced capabilities from network monitoring tools to guarantee the reliable delivery of the business services that will help the business succeed. Network monitoring tools need to provide more than just basic levels of monitoring and management. They need to break down the silos of traditional monitoring and combine metrics from application performance monitoring, infrastructure monitoring, and user experience monitoring into an open data lake driven by artificial intelligence (AI) and machine learning (ML). This will lead to use cases like customer experience and service analytics, performance and alarm analytics, predictive capacity analytics, and automated triage. Teams that combine business goals with AI-driven data analysis and remediation – connecting business and technology functions together – will be far more effective at achieving desired business outcomes. Teams need to adopt new AI and ML technologies that augment and even automate decision-making within a business outcome context. Through this approach, enterprises can establish the continuous insights and collective intelligence that optimize decision-making. Some will say that while today’s network monitoring tools are still important, they have been around for a very long time and are becoming increasingly commoditized, with blurred lines of differentiating functionality between vendors. These tools are more likely to distinguish themselves through advanced capabilities such as AIOps, application-awareness, and security. Recommendations Leading analysts recommend improving alignment with business objectives and their requirements for network visibility and agility by evaluating network monitoring tools that offer business-level analytics and integration with automated workflows. 
Broadcom agrees with leading analysts that a modern network monitoring tool should apply AIOps-driven functionality to ingest data from a variety of data sources, like raw packet and flow data, along with</description>
      </item>
      <item>
         <title>Virtual Agility Principle #3</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-3</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-3</guid>
         <pubDate>April 20, 2020</pubDate>
         <description>In previous posts, we addressed the importance of acknowledging change that’s suddenly imposed upon an organization, and how trust, empathy, and alignment should drive the transition to the new way of working. In addition, we talked about the need for a robust infrastructure to enable employees to connect and communicate in a virtual environment. Today, we’re focusing on employees who find themselves working in a remote environment for the very first time. For those used to physically working in an office with their colleagues on a daily basis, remote work can be a significant change, and even a challenge. Imagine, for a moment, being an employee on their first day of transitioning to a virtual environment, and the questions going through their mind. I know when I first transitioned to a remote position, I really didn’t know what to expect with regard to day-to-day operations: How do I get trained in Rally Software? How do I learn Google Docs and Hangouts? How do I perform effective product demonstrations? How do I meet with my manager to answer some of these questions? And the list went on. Now, I was just one person joining an existing team, but the underlying concerns are the same for those who’ve worked in a traditional environment for some time and now find themselves working in a virtual environment for the very first time. It’s an organizational concern as well. In addition to navigating the new model, there are differences in skills throughout an organization. Those of us in IT feel comfortable with the core set of products and skills for day-to-day work and communication, but that’s not true for employees across the organization. So, an organization has to develop an approach to “level the playing field” across the workforce, to reach a fundamental</description>
      </item>
      <item>
         <title>Spotlight on 2020 IBM Champions</title>
         <link>https://www.broadcom.com/sw-tech-blogs/mainframe/spotlight-on-2020-ibm-champions</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/mainframe/spotlight-on-2020-ibm-champions</guid>
         <pubDate>April 20, 2020</pubDate>
         <description>“Champions don’t show up to get everything they want; they show up to give everything they have.” Each year, IBM recognizes innovative thought leaders in the technical community with the IBM Champions Program. IBM Champions demonstrate both expertise in and extraordinary support and advocacy for IBM technology, communities, and solutions. IBM Champions are enthusiasts and advocates: IT professionals, business leaders, developers, executives, educators, and influencers who support and mentor others to help them get the most out of IBM software, solutions, and services. The program rewards their contributions by amplifying their voice and increasing their sphere of influence. The criteria to become a Champion are based on individual contributions that go far beyond the normal scope of a “job”; those who give generously of their time, effort, intellect, and energy are recognized and rewarded for their contributions. IBM received nearly 1,400 nominations in 2020. Looking for those with the most consistent and exceptional contributions, the selection committee members chose 604 new IBM Champions. These individuals are considered among the best in their respective fields of expertise, sharing their knowledge to help grow communities of professionals. Champions spend a considerable amount of their own time, energy, and resources on community efforts: organizing and leading user group events, answering questions in forums, contributing articles and applications, publishing podcasts, sharing instructional videos, and other activities. At CA Technologies, a Broadcom Company, we are proud of our 2020 IBM Champions, who provide significant contributions, especially to the Db2 community. Please take a moment to learn about our Champions: Philippe Dubost A 14-year IT professional and big fan of Db2 for z/OS, Philippe Dubost is the founder of csDUG, the Db2 Regional Users Group for the Czech Republic and Slovakia. 
Philippe works as Market Manager at CA Technologies, a Broadcom company. In this role, he is</description>
      </item>
      <item>
         <title>Broadcom receives award for female leadership at highest levels</title>
         <link>https://www.broadcom.com/blog/broadcom-receives-award-for-female-leadership-at-highest-levels</link>
         <guid>https://www.broadcom.com/blog/broadcom-receives-award-for-female-leadership-at-highest-levels</guid>
         <pubDate>April 22, 2020</pubDate>
         <description>Broadcom has been recognized for its commitment to achieving gender balance at the highest ranks of its leadership corps by the 2020 Women on Boards organization. The award – a Winning “W” – recognizes public corporations achieving 20 percent or higher female representation on their corporate boards. Broadcom currently has nine members on its Board of Directors, three of whom are women – Diane Bryant, Gayla Delly and Justine Page – and won the award for its 2019 representation.

“Broadcom is proud to be formally recognized for its ongoing commitment to gender diversity,” said Hock Tan, Broadcom president and CEO. “An important part of our strength as a company is the perspectives and talents our diverse teams and leadership bring to work every day, and we are honored to have received this award.”

2020 Women on Boards is an advocacy organization committed to driving gender balance on corporate boards of directors.
</description>
      </item>
      <item>
         <title>Virtual Agility Principle #4</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-4</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-4</guid>
         <pubDate>April 23, 2020</pubDate>
         <description>In our previous post, we covered the third principle of Virtual Agility, which is to leave no one behind. This post will focus on the fourth principle, which is to be purposeful. Due to recent developments, many companies were forced to re-deploy their entire workforce to a virtual environment. For most, it’s an entirely new experience. And while their first priority is to adjust to the logistics of the new model, they’ll soon be looking to return to business as usual. For many organizations, that means continuing to deliver value through agile practices. But those practices were most likely formed with co-located teams, or perhaps a hybrid approach that combined remote and co-located team members. The new, all-virtual landscape presents a new set of challenges to continuing those practices, and many companies are unsure how to proceed. The good news is that there are many organizations that are fully remote, by choice, and are very successful in practicing agility, both at the team level and at scale. And with purposeful planning and preparation, that success can be replicated for companies that suddenly find themselves working in a remote environment for the very first time. There will be challenges, to be sure. But the ability of an organization to realize and adjust to a new path based on changing circumstances is what being agile is all about. Below we discuss a number of principles that we’ve found to be important to support the new operating model, while keeping the focus on outcomes. Restore Predictable Ceremonies It’s essential for teams to return to a predictable meeting cadence to resume collaboration and maintain alignment across the team. Remote work inherently comes with a great deal of autonomy, but for that to work, teams must be aligned. Respect Team Time Zones I’ve been</description>
      </item>
      <item>
         <title>From Chaos to Clarity: How to accelerate your investment re-planning</title>
         <link>https://www.broadcom.com/sw-tech-blogs/clarity/from-chaos-to-clarity-how-to-accelerate-your-investment-re-planning</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/clarity/from-chaos-to-clarity-how-to-accelerate-your-investment-re-planning</guid>
         <pubDate>April 23, 2020</pubDate>
         <description>What’s the most important thing you have to do on your first day back from the COVID-19 disruption? Regardless of your industry or location, size or long-term goals, the answer is: start planning your future. But real planning – not the kind where you develop business cases that are more sales pitch than objective analysis and then pick the ones that sound good. You need planning that is going to recover your business; that’s going to build resilience, re-engage and re-empower your people, start the process of rebuilding your business, and provide a springboard to seize the opportunities that are ahead of you. You need to do the kind of planning that starts – and ends – at the top. And as a leader, you are going to be incredibly busy with all the decisions that come with returning your business to normal operations – or what will pass for normal in the new business world ahead of us. So, you don’t have a lot of time to dedicate to planning, no matter how much you want to. You also know that a fair chunk of the plans you develop are going to have to change anyway, because the new reality will evolve and morph as things begin to come back on stream in your business and around the globe. So yes, planning is the most important thing you have to do. But it must be: Effective Adaptive Strategic Top-down Let’s be honest, that doesn’t sound a lot like pre-pandemic planning, does it? That’s where Clarity’s roadmapping functionality comes in. No PowerPoint presentations, no hours of manual Excel updates. Just the ability to quickly pull ideas together for products, objectives, or initiatives, tie them to your strategies, budgets, and people, and then monitor the business outcomes. And if (when) things change, you</description>
      </item>
      <item>
         <title>Introducing Business Payload Analyzer: A New Approach for Insights into Business and Customer Experience</title>
         <link>https://www.broadcom.com/sw-tech-blogs/aiops/business-payload-analyzer-a-new-approach-for-insights-into-business-and-customer-experience</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/aiops/business-payload-analyzer-a-new-approach-for-insights-into-business-and-customer-experience</guid>
         <pubDate>April 9, 2020</pubDate>
         <description>In today's digitally driven world, ensuring a positive customer experience is essential to success. As many organizations undergo digital transformation initiatives, the ability to understand how customer experience impacts the business has become increasingly important. Operations and business teams need a unified view of the health and performance of business-critical applications in order to establish a common understanding of how certain performance issues might impact the bottom line. While traditional end-user monitoring and Application Performance Management (APM) solutions are able to provide some of this data, today's methods for collecting customer experience data still present many challenges: Development-driven tagging and match-based definitions require deep application knowledge and can be hard to maintain. Network taps are not able to handle high volumes of traffic. Cloud applications can be very complex and create blind spots in understanding user experience. Due to these limitations, the majority of organizations are still struggling to gain insight and prioritize problems based on impacts to business goals and to the overall customer experience—in fact, 53% of companies state their current tools do not provide the metrics they require. A new approach is needed to simplify the collection of key business metrics. As a result, DX Application Performance Management (DX APM) now has a new capability called Business Payload Analyzer (BPA), which uses data science and natural language processing techniques to reduce the maintenance and deployment challenges of current end-user monitoring tools. BPA is designed for application owners who want greater visibility into how the end-user experience and business key performance indicators (KPIs) align with application performance, providing the visibility needed to better prioritize issues based on user experience and improve the overall digital experience. In the following videos, you will see firsthand how to get started using Business Payload Analyzer to enable custom transaction naming</description>
      </item>
      <item>
         <title>Virtual Agility Principle #5</title>
         <link>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-5</link>
         <guid>https://www.broadcom.com/sw-tech-blogs/rally/virtual-agility-principle-5</guid>
         <pubDate>April 24, 2020</pubDate>
         <description>You might ask, why do we need a principle of Virtual Agility that says Be Agile? Isn’t that self-evident? We like to tell folks that “work is work.” That is really just shorthand for the concept that all the work that agile teams do should be managed in an agile manner — not just the work that they do to build their product(s). And we see too many organizations that embark on transformational journeys around agility but don’t treat the work that these transformations involve as work to be managed in an agile fashion. Transformations introduce more “work about the work” — there is no way around this. The goal is for this work about the work to decrease over time, and in many areas be eliminated. Some of it should just become ingrained, natural habits that are easily executed as a matter of course — for example, establishing and maintaining backlogs, and holding the required agile ceremonies. It’s not rocket science to Be Agile as you transition to Virtual Agility. Just be mindful of the agile principles that you already know and use: Organize teams to handle particular work for the transition. Build a prioritized backlog of things to be done. Pull work from that backlog in small batches. Either Kanban or Scrum processes work well for this. Demo work that is done so that everyone can provide fast feedback. Hold retrospectives. Iterate. Organize Teams If you already have an Agile Center of Excellence or similar group, you are ahead of the curve. Create a higher-level team to handle work that pertains to all agile teams in an organization. Resolve impediments that impact all teams. This is important to ensure that there is consistency in the transition. This team can create and disseminate communications and serve as a</description>
      </item>
      <item>
         <title>Silicon innovations in programmable switch hardware</title>
         <link>https://www.broadcom.com/blog/silicon-innovations-in-programmable-switch-hardware</link>
         <guid>https://www.broadcom.com/blog/silicon-innovations-in-programmable-switch-hardware</guid>
         <pubDate>April 27, 2020</pubDate>
         <description>Network switches have evolved from fixed-function to highly configurable to completely programmable hardware. Broadcom has long supported programmable switch hardware on the DNX line of products with the latest packet processor, Jericho 2 – a 10 Tb/s router in full production addressing the router market. In 2019, Broadcom introduced and sampled Trident 4, a 12.8 Tb/s compiler-programmable switch to address enterprise data center and campus markets. The Trident 4 announcement was accompanied by the release of Network Programming Language (nplang.org), an open language designed to program the data plane of a new generation of switches. The NPL language was developed primarily to best express the capability of the underlying silicon and to provide programmability at scale. (See “Broadcom’s new Trident 4 and Jericho 2 switch devices offer programmability at scale” in the Broadcom blog, June 27, 2019, for further exposition.) Building a switch that can excel in feature capacity and feature concurrency while allowing for programmability requires fundamental silicon innovations to build efficient hardware. Efficient hardware architecture is the primary driver to derive maximum potential from any technology node. The principles of efficient hardware architecture encompass area, peak power, average power, low latency and features. With programmability, end customers can maximize the benefits of the efficient hardware depending on the use case. While programmability allows new and custom features, it can equally be used to control the basic requirements of power and latency. NPL provides the capability to achieve all aspects of customers’ needs. Efficient hardware architecture involves multiple dimensions. Primary components are flexible storage and flexible processing. Flexible storage allows users to store different databases, perform flexible lookups and interpret results in different ways. Flexible storage needs to be optimized for different protocols with innovative memory techniques to reduce power consumption in lookups and compare operations. Flexible</description>
      </item>
   </channel>
</rss>
