We are pioneering the future of data infrastructure.

Posts Tagged 'QLogic'

  • February 08, 2022

    The Next Evolution in Storage Networking: The Self-Driving SAN

    By Todd Owens, Field Marketing Director, Marvell, and Jacqueline Nguyen, Field Marketing Manager, Marvell

    Storage area network (SAN) administrators know they play a pivotal role in ensuring mission-critical workloads stay up and running. The workloads and applications that run on the infrastructure they manage are key to the company's overall business success.

    As with any infrastructure, issues do arise from time to time, and the ability to identify transient links or address SAN congestion quickly and efficiently is paramount. Today, SAN administrators typically rely on proprietary tools and software from the Fibre Channel (FC) switch vendors to monitor SAN traffic. When SAN performance issues arise, they rely on years of experience to troubleshoot them.

    What creates congestion in a SAN anyway?

    Refresh cycles for servers and storage are typically shorter and more frequent than those of SAN infrastructure. As a result, servers and storage arrays running at different speeds end up connected to the same SAN. Legacy servers and storage arrays may connect to the SAN at 16GFC while newer servers and storage connect at 32GFC.

    Fibre Channel SANs use buffer credits to manage the flow of traffic in the SAN. When a slower device intermixes with faster devices on the SAN, it can be slow to return buffer credits, causing what is called “Slow Drain” congestion. This is a well-known issue in FC SANs that can be time-consuming to troubleshoot, and with newer FC-NVMe arrays the problem can be magnified. But these days are soon coming to an end with the introduction of what we can refer to as the self-driving SAN. A minimal simulation of the slow-drain effect is sketched below.
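
    To make the mechanics concrete, here is a minimal sketch, in Python, of how a single slow-draining device can congest a link it shares with faster devices. It is a toy discrete-time model, not Marvell or switch-vendor tooling: the credit counts, drain delays and the two hypothetical device names are assumptions chosen only to show head-of-line blocking behind a slow credit returner.

    from collections import deque

    CREDITS_PER_PORT = 4            # buffer-to-buffer credits granted by each receiving port (assumed)
    CREDIT_RETURN_DELAY = {         # ticks a device takes to return one credit (drain speed, assumed)
        "fast_array_32gfc": 1,      # hypothetical newer array, drains quickly
        "slow_array_16gfc": 8,      # hypothetical legacy array, drains slowly
    }
    SIM_TICKS = 200

    def simulate():
        credits = {d: CREDITS_PER_PORT for d in CREDIT_RETURN_DELAY}
        pending_returns = []        # (tick when the credit comes back, device)
        shared_link = deque()       # frames queued on the shared link, strictly FIFO
        delivered = {d: 0 for d in CREDIT_RETURN_DELAY}

        for tick in range(SIM_TICKS):
            # Hosts keep offering traffic: one new frame per destination per tick.
            for device in CREDIT_RETURN_DELAY:
                shared_link.append(device)

            # Credits come back after the destination-specific drain delay.
            for due, device in list(pending_returns):
                if due <= tick:
                    credits[device] += 1
                    pending_returns.remove((due, device))

            # The link can forward two frames per tick, but only if the frame
            # at the head of the queue has a credit available at its destination.
            for _ in range(2):
                if not shared_link:
                    break
                device = shared_link[0]
                if credits[device] == 0:
                    break           # head-of-line blocking: everyone behind waits
                shared_link.popleft()
                credits[device] -= 1
                pending_returns.append((tick + CREDIT_RETURN_DELAY[device], device))
                delivered[device] += 1

        print("frames delivered:", delivered)
        print("frames still stuck behind the slow drainer:", len(shared_link))

    if __name__ == "__main__":
        simulate()

    Running the sketch shows the fast array's delivered throughput collapsing toward the slow array's rate while the shared queue keeps growing, which is exactly why slow drain is so disruptive on a shared fabric.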

  • June 02, 2021

    Clearing Digital Traffic Jams with NVMe

    By Ian Sagan, Marvell Field Applications Engineer; Jacqueline Nguyen, Marvell Field Marketing Manager; and Nick De Maria, Marvell Field Applications Engineer

    Have you ever been stuck in bumper-to-bumper traffic? Frustrated by long checkout lines at the grocery store? Trapped at the back of a crowded plane while late for a connecting flight?

    Such bottlenecks waste time, energy and money. And while today’s digital logjams might seem invisible or abstract by comparison, they are just as costly, multiplied by zettabytes of data struggling through billions of devices – a staggering volume of data that is only continuing to grow.

    Fortunately, emerging Non-Volatile Memory Express (NVMe) technology can clear many of these digital logjams almost instantaneously, empowering system administrators to deliver quantum leaps in efficiency through lower latency and better performance. To the end user, this means avoiding the dreaded spinning icon and getting an immediate response.

  • August 27, 2020

    How to Reap the Benefits of NVMe over Fabric in 2020

    By Todd Owens, Field Marketing Director, Marvell

    As native Non-Volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

    Of course, NVMe technology itself is not new and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set specific to memory-based storage, delivers increased performance over PCIe 3.0 or PCIe 4.0 bus architectures, and, with 64,000 command queues and 64,000 commands per queue, offers far more scalability than other storage protocols.
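
    As a rough illustration of why those queue numbers matter, the snippet below multiplies out the maximum outstanding commands per device for NVMe, using the 64,000 x 64,000 figures quoted above, against the commonly cited single-queue limits of AHCI/SATA and SAS. The comparison figures are general protocol limits, not numbers from the original post.

    # Back-of-the-envelope comparison of per-device queueing headroom.
    # NVMe figures are the 64,000 x 64,000 numbers quoted above; the
    # AHCI/SATA and SAS limits are the commonly cited per-queue maximums.
    protocols = {
        "AHCI / SATA": (1, 32),
        "SAS":         (1, 254),
        "NVMe":        (64_000, 64_000),
    }

    for name, (queues, commands_per_queue) in protocols.items():
        total = queues * commands_per_queue
        print(f"{name:12s} {queues:>6,} queue(s) x {commands_per_queue:>6,} commands "
              f"= {total:>13,} outstanding commands")

    Even allowing for practical limits well below the specification maximums, the gap in parallelism spans several orders of magnitude, which is what lets NVMe keep many CPU cores issuing I/O concurrently.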
