We are pioneering the future of data infrastructure.

Archive for the 'Storage' Category

  • August 17, 2023

    Marvell Bravera SC5 SSD Controller Wins FMS 2023 Best of Show Award

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

     

    Marvell and Memblaze were honored with the “Most Innovative Customer Implementation” award at the Flash Memory Summit (FMS), the industry’s largest conference featuring flash memory and other high-speed memory technologies, last week.
    Powered by the Marvell® Bravera™ SC5 controller, Memblaze developed the PBlaze 7 7940 GEN5 SSD family, delivering an impressive 2.5 times the performance and 1.5 times the power efficiency of conventional PCIe 4.0 SSDs, along with ~55/9 µs read/write latency1. This makes the SSD ideal for business-critical applications and high-performance workloads like machine learning and cloud computing. In addition, Memblaze utilized the innovative sustainability features of Marvell's Bravera SC5 controllers for greater resource efficiency, reduced environmental impact, and streamlined development efforts and inventory management.

  • June 13, 2023

    FC-NVMe Goes Mainstream in HPE's Next-Generation Block Storage

    By Todd Owens, Field Marketing Director, Marvell

    While Fibre Channel (FC) has been around for a couple of decades now, the Fibre Channel industry continues to develop the technology in ways that keep it in the forefront of the data center for shared storage connectivity. Always a reliable technology, continued innovations in performance, security and manageability have made Fibre Channel I/O the go-to connectivity option for business-critical applications that leverage the most advanced shared storage arrays.

    A recent development that highlights the progress and significance of Fibre Channel is Hewlett Packard Enterprise's (HPE) recent announcement of their latest offering in their Storage as a Service (SaaS) lineup with 32Gb Fibre Channel connectivity. HPE GreenLake for Block Storage MP powered by HPE Alletra Storage MP hardware features a next-generation platform connected to the storage area network (SAN) using either traditional SCSI-based FC or NVMe over FC connectivity. This innovative solution not only provides customers with highly scalable capabilities but also delivers cloud-like management, allowing HPE customers to consume block storage any way they desire – own and manage, outsource management, or consume on demand.

    HPE GreenLake for Block Storage powered by Alletra Storage MP

    At launch, HPE is providing FC connectivity for this storage system to the host servers and supporting both FC-SCSI and native FC-NVMe. HPE plans to provide additional connectivity options in the future, but the fact they prioritized FC connectivity speaks volumes of the customer demand for mature, reliable, and low latency FC technology.

  • January 04, 2023

    Software-Defined Networking for the Software-Defined Vehicle

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell; John Heinlein, Chief Marketing Officer, Sonatus; and Simon Edelhaus, VP of Software, Automotive Business Unit, Marvell

    The software-defined vehicle (SDV) is one of the newest and most exciting megatrends in the automotive industry. As discussed in a previous blog, this new architecture and business model will succeed because it benefits every stakeholder:

    • OEMs (automakers) will gain new revenue streams from aftermarket services and new applications.
    • Vehicle owners will be able to easily upgrade their cars' functions and features.
    • Mobile network operators will benefit from the increased vehicle data consumption driven by new applications.

    What is a software-defined vehicle? There is no formal definition, but the term reflects a change in how software is used in vehicle design to enable flexibility and extensibility. To better understand the software-defined vehicle, it helps to first examine the current approach.

    The electronic control units (ECUs) that manage vehicle functions today contain software, but each ECU's software is often siloed and incompatible with other modules. When an update is needed, the vehicle owner must take the car to a dealer service center, which is inconvenient for the owner and costly for the manufacturer.

  • November 28, 2022

    A Marvelous Hack - Winning the Hearts of SONiC Users

    By Kishore Atreya, Director of Product Management, Marvell

    Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open source networking operating system targeted for data center applications that’s been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems. 

    So, what could we hack?

    At Marvell, we believe that SONiC has utility not only for the data center, but to enable solutions that span from edge to cloud. Because it’s a data center NOS, SONiC is not optimized for edge use cases. It requires an expensive bill of materials to run, including a powerful CPU, a minimum of 8 to 16GB DDR, and an SSD. In the data center environment, these HW resources contribute less to the BOM cost than do the optics and switch ASIC. However, for edge use cases with 1G to 10G interfaces, the cost of the processor complex, primarily driven by the NOS, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.

  • October 26, 2022

    Tasting Notes for 64G Fibre Channel

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    While age is just a number and so is a new speed for Fibre Channel (FC), the number itself is often irrelevant and it's the maturity that matters – kind of like a bottle of wine! Today, as we make a toast to the data center and pop open (announce) the Marvell® QLogic® 2870 Series 64G Fibre Channel HBAs, take a glass and sip into its maturity to find notes of trust and reliability alongside operational simplicity, in-depth visibility, and consistent performance.

    Big words on the label? I will let you be the sommelier as you work through your glass and my writings.

    Marvell QLogic 2870 series 64GFC HBAs

  • October 12, 2022

    The Evolution of Cloud Storage and Memory

    By Gary Kotzur, CTO, Storage Products Group, Marvell and Jon Haswell, SVP, Firmware, Marvell

    The nature of storage is changing much more rapidly than it ever has historically. This evolution is being driven by expanding amounts of enterprise data and the inexorable need for greater flexibility and scale to meet ever-higher performance demands.

    If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which is a combination of both. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving towards a more composable architecture. All of this is driving the need for highly customized cloud storage solutions as well as demanding the need for a comparable solution in the memory domain.

  • September 26, 2022

    SONiC: It’s Not Just for Switches Anymore

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    SONiC (Software for Open Networking in the Cloud) has steadily gained momentum as a cloud-scale network operating system (NOS) by offering a community-driven approach to NOS innovation. In fact, 650 Group predicts that revenue for SONiC hardware, controllers and OSs will grow from around US$2 billion today to around US$4.5 billion by 2025. 

    Those using it know that the SONiC open-source framework shortens software development cycles; and SONiC’s Switch Abstraction Interface (SAI) provides ease of porting and a homogeneous edge-to-cloud experience for data center operators. It also speeds time-to-market for OEMs bringing new systems to the market.

    The bottom line: more choice is good when it comes to building disaggregated networking hardware optimized for the cloud. Over recent years, SONiC-using cloud customers have benefited from consistent user experience, unified automation, and software portability across switch platforms, at scale.

    As the utility of SONiC has become evident, other applications are lining up to benefit from this open-source ecosystem.

    A SONiC Buffet: Extending SONiC to Storage

    SONiC capabilities in Marvell’s cloud-optimized switch silicon include high availability (HA) features, RDMA over converged ethernet (RoCE), low latency, and advanced telemetry. All these features are required to run robust storage networks.

    Here's one use case: EBOF. The capabilities above form the foundation of Marvell's Ethernet-Bunch-of-Flash (EBOF) storage architecture. The new EBOF architecture addresses the non-storage bottlenecks that constrain the performance of the traditional Just-a-Bunch-of-Flash (JBOF) architecture it replaces – by disaggregating storage from compute.

    EBOF architecture replaces the bottleneck components found in JBOF - CPUs, DRAM and SmartNICs - with an Ethernet switch, and it's here that SONiC is added to the plate. Marvell has, for the first time, applied SONiC to storage, specifically for services enablement, including the NVMe-oF™ (NVM Express over Fabrics) discovery controller, and out-of-band management for EBOF using Redfish® management. This implementation is in production today on the Ingrasys ES2000 EBOF storage solution. (For more on this topic, check out this, this, and this.)

    Marvell has now extended SONiC NOS to enable storage services, thus bringing the benefits of disaggregated open networking to the storage domain.

    OK, tasty enough, but what about compute?

    How Would You Like Your Arm Prepared?

    I prefer Arm for my control plane processing, you say. Why can’t I manage those switch-based processors using SONiC, too, you ask? You’re in luck. For the first time, SONiC is the OS for Arm-based, embedded control plane processors, specifically the control plane processors found on Marvell® Prestera® switches. SONiC-enabled Arm processing allows SONiC to run on lower-cost 1G systems, reducing the bill-of-materials, power, and total cost of ownership for both management and access switches.

    In addition to embedded processors, with the OCTEON® family, Marvell offers a smorgasbord of Arm-based processors. These can be paired with Marvell switches to bring the benefits of the Arm ecosystem to networking, including Data Processing Units (DPUs) and SmartNICs.

    By combining SONiC with Arm processors, we’re setting the table for the broad Arm software ecosystem - which will develop applications for SONiC that can benefit both cloud and enterprise customers.

    The Third Course

    So, you've made it through the SONiC-enabled switching and on-chip control processing courses, but there's something more you need to round out the meal. Something to reap the full benefit of your SONiC experience. PHY, of course. Whether your taste runs to copper or optical media, PAM or coherent modulation, Marvell provides a complete SONiC-enabled portfolio by offering SONiC with our (not baked) Alaska® Ethernet PHYs and optical modules built using Marvell DSPs.

    Room for Dessert?

    Finally, by enabling SONiC across the data center and enterprise switch portfolio we’re able to bring operators the enhanced telemetry and visibility capabilities that are so critical to effective service-level validation and troubleshooting. For more information on Marvell telemetry capabilities, check out this short video:

     

    The Drive Home

    Disaggregation has lowered the barrier-to-entry for market participants - unleashing new innovations from myriad hardware and software suppliers. By making use of SONiC, network designers can readily design and build disaggregated data center and enterprise networks.

    For its part, Marvell’s goal is simple: help realize the vision of an open-source standardized network operating system and accelerate its adoption.

  • August 05, 2022

    Marvell SSD Controller Wins FMS 2022 Best of Show Award

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell


    FMS best of show award

    Flash Memory Summit (FMS), the industry’s largest conference featuring data storage and memory technology solutions, presented its 2022 Best of Show Awards at a ceremony held in conjunction with this week’s event. Marvell was named a winner alongside Exascend for the collaboration of Marvell’s edge and client SSD controller with Exascend’s high-performance memory card.

    Honored as the “Most Innovative Flash Memory Consumer Application,” the Exascend Nitro CFexpress card powered by Marvell’s PCIe® Gen 4, 4-NAND channel 88SS1321 SSD controller enables digital storage of ultra-HD video and photos in extreme temperature environments where ruggedness, endurance and reliability are critical. The Nitro CFexpress card is unique in its controller, hardware and firmware architecture in that it combines Marvell’s 12nm process node, low-power, compact form factor SSD controller with Exascend’s innovative hardware design and Adaptive Thermal Control™ technology.

    The Nitro card is the highest-capacity VPG400 CFexpress card on the market, with up to 1 TB of storage, and is certified VPG400 by the CompactFlash® Association using its stringent Video Performance Guarantee Profile 4 (VPG400) qualification. Marvell’s 88SS1321 controller helps drive the Nitro card’s 1,850 MB/s of sustained read and 1,700 MB/s of sustained write for ultimate performance.

    “Consumer applications, such as high-definition photography and video capture using professional photography and cinema cameras, require the highest performance from their storage solution. They also require the reliability to address the dynamics of extreme environmental conditions, both indoors and outdoors,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the collaboration of Marvell’s SSD controllers with Exascend’s memory cards, delivering 1,850 MB/s of sustained read and 1,700 MB/s sustained write for ultimate performance addressing the most extreme consumer workloads. Additionally, Exascend’s Adaptive Thermal Control™ technology provides an IP67 certified environmental hardening that is dustproof, water resistant and tackles the issue of overheating and thermal throttling.”

    More information on the 2022 Flash Memory Summit Best of Show Award Winners can be found here.

  • December 06, 2021

    Marvell and Ingrasys Collaborate to Power Ceph Clusters with EBOF in Data Centers

    By Khurram Malik, Senior Manager, Technical Marketing, Marvell

    A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS) which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object storage and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph BlueStore back-end storage removes the Ceph cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially critical in boosting the adoption of NVMe SSDs in the Ceph cluster. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect use case for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking, and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations struggle to scale out, and deploying petabyte- or exabyte-scale data storage becomes infeasible from a CAPEX and OPEX perspective.
    Ingrasys (a subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution that truly enables scale-out architecture for data center deployments. The EBOF architecture disaggregates storage from compute and provides limitless scalability, better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration at the enclosure level with no single point of failure.

    Power Ceph Cluster with EBOF in Data Centers image 1

    Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters. It is highly flexible due to its distributed nature. EBOF use in a Ceph cluster enables added storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of 3 monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication, and data recovery operations. Traditionally, Ceph recommends a minimum of 3 replicas to distribute copies of the data and ensure that the copies are stored on different storage nodes, but this results in lower usable capacity and consumes more network bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with management software – Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
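
    To make the replication trade-off concrete, here is a minimal Python sketch of the capacity and write-amplification cost of the 3-replica scheme described above. The raw pool size in the example is an illustrative assumption, not a figure from the Ingrasys/Marvell solution.

```python
# Minimal sketch: capacity and write-amplification cost of Ceph replication.
# The raw capacity used below is an illustrative assumption.

def ceph_replication_overhead(raw_tb: float, replicas: int = 3) -> dict:
    """Return usable capacity and write amplification for simple replication."""
    usable_tb = raw_tb / replicas          # each object is stored `replicas` times
    efficiency = usable_tb / raw_tb        # fraction of raw capacity that is usable
    write_amplification = replicas         # every client write fans out to N OSDs
    return {
        "usable_tb": usable_tb,
        "capacity_efficiency": efficiency,
        "write_amplification": write_amplification,
    }

if __name__ == "__main__":
    # Example: a 1 PB (1000 TB) raw EBOF pool with the default 3 replicas
    stats = ceph_replication_overhead(raw_tb=1000, replicas=3)
    print(stats)  # usable ~333 TB, ~33% efficiency, 3x write amplification
```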

  • January 11, 2021

    Industry’s First NVMe Boot Device for HPE® ProLiant® and HPE Apollo Servers Delivers Simple, Secure and Reliable Boot Solution based on Innovative Technology from Marvell

    By Todd Owens, Field Marketing Director, Marvell

    Today, operating systems (OSs) like VMware recommend that OS data be kept completely separated from user data using non-network RAID storage. This is a best practice for any virtualized operating system including VMware, Microsoft Azure Stack HCI (Storage Spaces Direct) and Linux. Thanks to innovative flash memory technology from Marvell, a new secure, reliable and easy-to-use OS boot solution is now available for Hewlett Packard Enterprise (HPE) servers.

    While 32GB micro-SD or USB boot device options are available today, VMware requires as much as 128GB of storage for the OS and Microsoft Storage Spaces Direct needs 200GB, so these solutions simply don’t have the storage capacity needed. Using hardware RAID controllers and disk drives in the server bays is another option. However, this adds significant cost and complexity to a server configuration just to meet the OS requirement. The proper solution for separating the OS from user data is the HPE NS204i-p NVMe OS Boot Device.

  • November 12, 2020

    Flash Memory Summit Names Marvell a 2020 Best of Show Award Winner

    By Lindsey Moore, Marketing Coordinator, Marvell

    Marvell wins FMS Award for Most Innovative Technology

    Flash Memory Summit, the industry's largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for "Most Innovative Flash Memory Technology" in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.

    Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell's partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.

  • October 27, 2020

    Delivering a Better Gaming Experience with NVMe RAID

    By Shahar Noy, Senior Director, Product Marketing

    You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 or GeForce RTX 20 series, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit looks when you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved and award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad of hardware updates available to serious gamers like you. You have your dream system in hand and life can’t be better.

  • August 03, 2018

    An Infrastructure Powerhouse: Marvell and Cavium Become One!

    By Todd Owens, Field Marketing Director, Marvell

    Marvell and Cavium

    Marvell’s acquisition of Cavium closed on July 6th, 2018, and the integration is well under way. Cavium is now a wholly owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive industries.

    infrastructure powerhouse

    For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on I/O and processor technology we provided to HPE before the acquisition are the same you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high performance processors and transceivers that are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

    Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.

       

    Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters 

    We will continue to provide ThunderX2® Arm® processor technology for HPC servers like the HPE Apollo 70 for high-performance compute applications. We will also continue to provide Ethernet networking technology that is embedded into HPE Servers and Storage today and Marvell ASIC technology used for the iLO5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

      iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs


    That sounds great, but what’s going to change over time?

    The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center. 

    Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

    Down the road, you’ll see a shift in our branding and where you access info over time as well. While our product-specific brands, like ThunderX2 for Arm, or QLogic for Fibre Channel and FastLinQ for Ethernet will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at “hpesolutions@marvell.com” as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

    In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.

  • April 05, 2018

    VMware vSAN ReadyNode Recipes Can Use Substitutions

    By Todd Owens, Field Marketing Director, Marvell

    When you are baking a cake, you sometimes substitute in different ingredients to make the result better. The same can be done with VMware vSAN ReadyNode configurations, or recipes. Some changes to the documented configurations can make the end solution much more flexible and scalable. VMware allows certain elements within a vSAN ReadyNode bill of materials (BOM) to be substituted. In this VMware blog, the author outlines the server elements in the BOM that can change, including:
    • CPU
    • Memory
    • Caching Tier
    • Capacity Tier
    • NIC
    • Boot Device
    However, changes can only be made with devices that are certified as supported by VMware. The list of certified I/O devices can be found in the VMware vSAN Compatibility Guide, and the full portfolio of NICs, FlexFabric Adapters and Converged Network Adapters from HPE and Cavium is supported. If we zero in on the HPE recipes for vSAN ReadyNode configurations, here are the substitutions you can make for I/O adapters. OK, so we know what substitutions we can make in these vSAN storage solutions. What are the benefits to the customer of making this change? There are several benefits to the HPE/Cavium technology compared to the other adapter offerings.
    • HPE 520/620 Series adapters support Universal RDMA – the ability to support both RoCE and iWARP RDMA protocols with the same adapter.
      • Why Does This Matter? Universal RDMA offers flexibility in choice when low latency is a requirement. RoCE works great if customers have already deployed a lossless Ethernet infrastructure. iWARP is a great choice for greenfield environments as it works on existing networks, doesn’t require the complexity of lossless Ethernet and thus scales infinitely better.
    • Concurrent Network Partitioning (NPAR) and SR-IOV
      • NPAR (Network Partitioning) allows for virtualization of the physical adapter port. SR-IOV offload moves management of the VM network from the hypervisor (CPU) to the adapter. With HPE/Cavium adapters, these two technologies can work together to optimize connectivity for virtual server environments and offload the hypervisor (and thus the CPU) from managing VM traffic, while providing full Quality of Service at the same time.
    • Storage Offload
      • Ability to reduce CPU utilization by offering iSCSI or FCoE Storage offload on the adapter itself. The freed-up CPU resources can then be used for other, more critical tasks and applications. This also reduces the need for dedicated storage adapters, connectivity hardware and switches, lowering overall TCO for storage connectivity.
    • Offloads in general – In addition to RDMA, Storage and SR-IOV Offloads mentioned above, HPE/Cavium Ethernet adapters also support TCP/IP Stateless Offloads and DPDK small packet acceleration offloads as well. Each of these offloads moves work from the CPU to the adapter, reducing the CPU utilization associated with I/O activity. As mentioned in my previous blog, because these offloads bypass tasks in the O/S Kernel, they also mitigate any performance issues associated with Spectre/Meltdown vulnerability fixes on X86 systems.
    • Adapter Management integration with vCenter – All HPE/Cavium Ethernet adapters are managed by Cavium’s QCC utility, which can be fully integrated into VMware vCenter. This provides a much simpler approach to I/O management in vSAN configurations.
    In summary, if you are looking to deploy vSAN ReadyNode, you might want to fit in a substitution or two on the I/O front to take advantage of all the intelligent capabilities available in Ethernet I/O adapters from HPE/Cavium. Sure, the standard ingredients work, but the right substitution will make things more flexible, scalable and deliver an overall better experience for your client.
  • March 02, 2018

    Connecting Shared Storage – iSCSI or Fibre Channel

    By Todd Owens, Field Marketing Director, Marvell

    At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is which protocol is best for connecting servers to shared storage? The answer is, it depends. 

    We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC. 

    Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work. 

    This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC). 

    Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can be run on the same network as the server and client network traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications or with say four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues. 

    Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.

      list of Hewlett Packard Enterprise (HPE) component prices

      Notes:
      1. Optical transceivers are needed at both the adapter and switch ports for 10GbE networks, so cost/port is two times the transceiver cost.
      2. FC switch pricing includes full-featured management software and licenses.
      3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

    So if we do the math, the cost per port looks like this: 

    10GbE iSCSI with SFP+ Optics = $437+$2,734+$300 = $3,471 

    10GbE iSCSI with 3 meter Direct Attach Cable (DAC) = $437+$269+$300 = $1,006 

    16GFC with SFP+ Optics = $773 + $405 + $1,400 = $2,578 

    So iSCSI is the lowest price if DAC cables are used. Note, in my example, I chose 3 meter cable length, but even if you choose shorter or longer cables (HPE supports from 0.65 to 7 meter cable lengths), this is still the lowest cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration. When using fiber optic cables, the 16GFC configuration is lower cost. 
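
    For reference, here is a small Python sketch that reproduces the per-port arithmetic above from its components; the figures are the HPE list prices quoted in this post and will differ from current pricing.

```python
# Re-computes the per-port cost figures quoted above from their components.
# Prices are the HPE list prices cited in the post (adapter, cabling/optics, switch port);
# they are illustrative and will not match current pricing.

def cost_per_port(adapter: float, cabling: float, switch_port: float) -> float:
    """Total connection cost for one server port: adapter + optics/cable + switch port."""
    return adapter + cabling + switch_port

configs = {
    "10GbE iSCSI, SFP+ optics": cost_per_port(437, 2734, 300),
    "10GbE iSCSI, 3m DAC":      cost_per_port(437, 269, 300),
    "16GFC, SFP+ optics":       cost_per_port(773, 405, 1400),
}

for name, cost in configs.items():
    print(f"{name}: ${cost:,.0f}")
# 10GbE iSCSI, SFP+ optics: $3,471
# 10GbE iSCSI, 3m DAC: $1,006
# 16GFC, SFP+ optics: $2,578
```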

    So what are the trade-offs with DAC versus SFP+ options? It really comes down to distance and the number of connections required. The DAC cables can only span up to 7 meters or so. That means customers have only limited reach within or across racks. If customers have multiple racks or distance requirements of more than 7 meters, FC becomes the more attractive option from a cost perspective. Also, DAC cables are bulky, and when trying to cable 10 or more ports, the cable bundles can become unwieldy to deal with. 

    On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offers very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions. Well, that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:

    • 16GFC delivers full line-rate performance for block storage data transfers. Today’s 10GbE iSCSI runs on the Ethernet protocol with Data Center Bridging (DCB), which makes it a lossless transmission protocol like FC. However, the iSCSI commands are transferred via Transmission Control Protocol (TCP)/IP, which adds significant overhead to the headers of each packet. Because of this inefficiency, the actual bandwidth for iSCSI traffic is usually well below the stated line rate (a rough worked example follows after this list). This gives 16GFC the clear advantage in terms of bandwidth performance.
    • iSCSI provides the best IOPS performance for block sizes below 2K. Figure 1 shows IOPS performance of Cavium iSCSI with hardware offload. Figure 2 shows IOPS performance of Cavium’s QLogic 16GFC adapter and you can see better IOPS performance for 4K and above, when compared to iSCSI.
    • Latency is an order of magnitude lower for FC compared to iSCSI. Latency of Brocade Gen 5 (16Gb) FC switching (using cut-through switch architecture) is in the 700 nanoseconds range and for 10GbE it is in the range of 5 to 50 microseconds. The impact of latency gets compounded with iSCSI should the user implement 10GBASE-T connections in the iSCSI adapters. This adds another significant hit to the latency equation for iSCSI.
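
    To illustrate the header-overhead point from the first bullet, here is a rough Python sketch estimating iSCSI protocol efficiency on a standard Ethernet frame. The header sizes are standard Ethernet/IPv4/TCP/iSCSI values; the assumption of a 1500-byte MTU with no jumbo frames, digests or TCP options is mine, and the result is optimistic since it ignores TCP acknowledgements, retransmissions and small I/Os.

```python
# Rough estimate of iSCSI protocol efficiency on a standard 1500-byte Ethernet MTU.
# Header sizes are standard values (Ethernet 14B + FCS 4B, IPv4 20B, TCP 20B,
# iSCSI basic header segment 48B). Real deployments vary (jumbo frames, digests,
# TCP options), and ACK/retransmit traffic pushes usable throughput lower still.

LINE_RATE_GBPS = 10.0

mtu          = 1500          # IP packet size on a standard (non-jumbo) Ethernet frame
eth_overhead = 14 + 4        # Ethernet header + FCS (preamble/IFG ignored for simplicity)
ip_tcp_hdrs  = 20 + 20       # IPv4 + TCP headers
iscsi_bhs    = 48            # iSCSI basic header segment

payload        = mtu - ip_tcp_hdrs - iscsi_bhs
wire_bytes     = mtu + eth_overhead
efficiency     = payload / wire_bytes
effective_gbps = LINE_RATE_GBPS * efficiency

print(f"protocol efficiency ~ {efficiency:.1%}")         # headers alone cost ~7%
print(f"effective bandwidth ~ {effective_gbps:.2f} Gbps")
```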

    Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance

    Figure 2: Cavium’s QLogic 16Gb FC IOPS performance

    If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators. 

    On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve the FC SAN diagnostics capability with Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping) and Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR provides the storage administrator the ability to zone the fabric and map the servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console. 

    Another way to look at manageability is in the number of administrators on staff. In many enterprise environments, there are typically dozens of network administrators. In those same environments, there may be less than a handful of “SAN” administrators. Yes, there are lots of LAN connected devices that need to be managed and monitored, but so few for SAN connected devices. The point is it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade. 

    So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criterion, it’s hard to beat the combination of bandwidth, IOPS and latency of the 16GFC SAN. If compatibility and commonality with existing infrastructures is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice. When is the last time you heard of an FC network being hacked into? And if cost is the key criterion, iSCSI with DAC or 10GBASE-T connection is a good choice, understanding the trade-off in latency and bandwidth performance. 

    So in very general terms, FC is the best choice for enterprise customers who need high performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems, like hyper-converged storage solutions, to provide simple connectivity to existing infrastructure. 

    As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is understand what the customer needs and design the best solution to meet those needs based on the information above.

  • January 11, 2018

    Storing the World’s Data

    By Marvell PR Team

    Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (which is almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead. 

    Further down the line, the building of smart cities will lead to an additional ramp up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few. A very large proportion of the data of the future will need to be accessed in real-time. This will have implications on the technology utilized and also where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too. 

    So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable - with cost and performance being important factors. 

    There are now a wide variety of different storage media options out there. In some cases these are long established while in others they are still in the process of emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities are continuing to increase substantially and their overall cost effectiveness also adds to their appeal. The immense data storage requirements that are being warranted by the cloud mean that HDD is witnessing considerable traction in this space.

    There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable. 

    Marvell was quick to recognize the ever growing importance of data storage and has continued to make this sector a major focus moving forwards, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

    Within a period of only 18 months after its release, Marvell managed to ship over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultra-books. These controllers are capable of supporting read speeds of 1600MB/s, while only drawing minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies. 

    The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe. Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

  • December 13, 2017

    Marvell’s NVMe DRAM-less SSD Controller Wins a 2017 ACE Award

    By Sander Arts, Interim VP of Marketing, Marvell

    ACE Awards logo

    Key representatives of the global technology sector gathered at the San Jose Convention Center last week to hear the recipients of this year’s Annual Creativity in Electronics (ACE) Awards announced. This prestigious awards event, which is organized in conjunction with leading electronics engineering magazines EDN and EE Times, highlights the most innovative products announced in the last 12 months, as well as recognizing visionary executives and the most promising new start-ups. A panel made up of the editorial teams of these magazines, plus several highly respected independent judges, was involved in selecting the winner in each category.

    88NV1160 controller for non-volatile memory express

    The 88NV1160 high-performance controller for non-volatile memory express (NVMe), which was introduced by Marvell earlier this year, fought off tough competition from companies like Diodes Inc. and Cypress Semiconductor to win the coveted Logic/Interface/Memory category. Marvell gained two further nominations at the awards, with the 98PX1012 Prestera PX Passive Intelligent Port Extender (PIPE) also being featured in the Logic/Interface/Memory category, while the 88W8987xA automotive wireless combo SoC was among those cited in the Automotive category. 

    Designed for inclusion in the next generation of streamlined portable computing devices (such as high-end tablets and ultra-books), the 88NV1160 NVMe solid-state drive (SSD) controllers are able to deliver 1600MB/s read speeds while simultaneously keeping the power consumption required for such operations extremely low (<1.2W). Based on a 28nm low power CMOS process, each of these controller ICs has a dual core 400MHz Arm® Cortex®-R5 processor embedded into it. 

    Through incorporation of a host memory buffer, the 88NV1160 exhibits far lower latency than competing devices. It is this that is responsible for accelerating the read speeds supported. By utilizing its embedded SRAM, the controller does not need to rely on an external DRAM memory - thereby simplifying the memory controller implementation. As a result, there is a significant reduction in the board space required, as well as a lowering of the overall bill-of-materials costs involved. 

    The 88NV1160’s proprietary NANDEdge™ low density parity check error-correction functionality raises SSD endurance and makes sure that long term system reliability is upheld throughout the end product’s entire operational lifespan. The controller’s built-in 256-bit AES encryption engine ensures that stored metadata is safeguarded from potential security breaches. Furthermore, these DRAM-less ICs are very compact, thus enabling multiple-chip package integration to be benefitted from. 

    Consumers now expect their portable electronics equipment to possess a lot more computing resource, so that they can access the exciting array of new software apps now becoming available; making use of cloud-based services, enjoying augmented reality and gaming. At the same time as offering functions of this kind, such items of equipment need to be able to support longer periods between battery recharges, so as to further enhance the user experience. This calls for advanced ICs combining strong processing capabilities with improved power efficiency, and that is where the 88NV1160 comes in.

    ACE 2017 Award

    "We're excited to honor this robust group for their dedication to their craft and efforts in bettering the industry for years to come," said Nina Brown, Vice President of Events at UBM Americas. "The judging panel was given the difficult task of selecting winners from an incredibly talented group of finalists and we'd like to thank all of those participants for their amazing work and also honor their achievements. These awards aim to shine a light on the best in today's electronics realm and this group is the perfect example of excellence within both an important and complex industry."

  • October 17, 2017

    Unlocking the Potential of Flash Storage with NVMe

    By Jeroen Dorgelo, Director of Strategy, Storage Group, Marvell

    The dirty little secret of flash drives today is that many of them are running on yesterday’s interfaces. While SATA and SAS have undergone several iterations since they were first introduced, they are still based on decades-old concepts and were initially designed with rotating disks in mind. These legacy protocols are bottlenecking the potential speeds possible from today’s SSDs. 

    NVMe is the latest storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables the full performance capabilities of today’s SSDs to be realized. Because of price and compatibility, NVMe has taken a while to see uptake, but now it is finally coming into its own. 

    Serial Attached Legacy 

    Currently, SATA is the most common storage interface. Whether a hard drive or increasingly common flash storage, chances are it is running through a SATA interface. The latest generation of SATA - SATA III – has a 600 MB/s bandwidth limit. While this is adequate for day-to-day consumer applications, it is not enough for enterprise servers. Even I/O intensive consumer use cases, such as video editing, can run into this limit. 

    The SATA standard was originally released in 2000 as a serial-based successor to the older PATA standard, a parallel interface. SATA uses the advanced host controller interface (AHCI) which has a single command queue with a depth of 32 commands. This command queuing architecture is well-suited to conventional rotating disk storage, though more limiting when used with flash. 

    Whereas SATA is the standard storage interface for consumer drives, SAS is much more common in the enterprise world. Released originally in 2004, SAS is also a serial replacement for an older parallel standard, SCSI. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages over SATA for data center use - such as longer cable lengths, multipath IO, and better error reporting. SAS also has a higher bandwidth limit of 1200MB/s. 

    Just like SATA, SAS has a single command queue, although the queue depth of SAS goes to 254 commands instead of 32 commands. While the larger command queue and higher bandwidth limit make it better performing than SATA, SAS is still far from being the ideal flash interface. 

    NVMe - Massive Parallelism 

    Introduced in 2011, NVMe was designed from the ground up for addressing the needs of flash storage. Developed by a consortium of storage companies, its key objective is specifically to overcome the bottlenecks on flash performance imposed by SATA and SAS. 

    Whereas SATA is restricted to 600MB/s and SAS to 1200MB/s (as mentioned above), NVMe runs over the PCIe bus and its bandwidth is theoretically limited only by the PCIe bus speed. With current PCIe standards providing 1GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never represents a bottleneck for NVMe-based SSDs. 

    NVMe is designed to deliver massive parallelism, offering 64,000 command queues, each with a queue depth of 64,000 commands. This parallelism fits in well with the random access nature of flash storage, as well as the multi-core, multi-threaded processors in today’s computers. NVMe’s protocol is streamlined, with an optimized command set that does more in fewer operations compared to AHCI. IO operations often need fewer commands than with SATA or SAS, allowing latency to be reduced. For enterprise customers, NVMe also supports many enterprise storage features, such as multi-path IO and robust error reporting and management. 
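
    As a quick way to visualize the gap, the short Python sketch below tabulates the interface limits described above. The bandwidth and queue figures come from the text (SATA III 600 MB/s with one 32-deep queue, SAS 1200 MB/s with one 254-deep queue, NVMe with 64,000 queues of 64,000 commands at roughly 1 GB/s per PCIe lane); the choice of a four-lane PCIe link for the NVMe row is an illustrative assumption.

```python
# Compares the interface limits described above. Bandwidth and queue figures come from
# the text; the x4 PCIe link width chosen for the NVMe row is an illustrative assumption.

interfaces = [
    # name, bandwidth (MB/s), command queues, queue depth
    ("SATA III (AHCI)",    600,      1,      32),
    ("SAS",                1200,     1,      254),
    ("NVMe over PCIe x4",  4 * 1000, 64_000, 64_000),
]

print(f"{'Interface':<20}{'BW (MB/s)':>12}{'Queues':>10}{'Depth':>10}{'Outstanding cmds':>20}")
for name, bw, queues, depth in interfaces:
    # "Outstanding cmds" = how many commands can be in flight at once (queues x depth)
    print(f"{name:<20}{bw:>12,}{queues:>10,}{depth:>10,}{queues * depth:>20,}")
```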

    Pure speed and low latency, plus the ability to deal with high IOPs have made NVMe SSDs a hit in enterprise data centers. Companies that particularly value low latency and high IOPs, such as high-frequency trading firms and  database and web application hosting companies, have been some of the first and most avid endorsers of NVMe SSDs. 

    Barriers to Adoption 

    While NVMe is high performance, historically speaking it has also been considered relatively high cost. This cost has negatively affected its popularity in the consumer-class storage sector. Relatively few operating systems supported NVMe when it first came out, and its high price made it less attractive for ordinary consumers, many of whom could not fully take advantage of its faster speeds anyway. 

    However, all this is changing. NVMe prices are coming down and, in some cases, achieving price parity with SATA drives. This is due not only to market forces but also to new innovations, such as DRAM-less NVMe SSDs. 

    As DRAM is a significant bill of materials (BoM) cost for SSDs, DRAM-less SSDs are able to achieve lower, more attractive price points. Since NVMe 1.2, host memory buffer (HMB) support has allowed DRAM-less SSDs to borrow host system memory as the SSD’s DRAM buffer for better performance. DRAM-less SSDs that take advantage of HMB support can achieve performance similar to that of DRAM-based SSDs, while simultaneously saving cost, space and energy. 

    NVMe SSDs are also more power-efficient than ever. While the NVMe protocol itself is already efficient, the PCIe link it runs over can consume significant levels of idle power. Newer NVMe SSDs support highly efficient, autonomous sleep state transitions, which allow them to achieve energy consumption on par or lower than SATA SSDs. 

    All this means that NVMe is more viable than ever for a variety of use cases, from large data centers that can save on capital expenditures due to lower cost SSDs and operating expenditures as a result of lower power consumption, as well as power-sensitive mobile/portable applications such as laptops, tablets and smartphones, which can now consider using NVMe. 

    Addressing the Need for Speed 

    While the need for speed is well recognized in enterprise applications, is the speed offered by NVMe actually needed in the consumer world? For anyone who has ever installed more memory, bought a larger hard drive (or SSD), or ordered a faster Internet connection, the answer is obvious. 

    Today’s consumer use cases generally do not yet test the limits of SATA drives, and part of the reason is most likely because SATA is still the most common interface for consumer storage. Today’s video recording and editing, gaming and file server applications are already pushing the limits of consumer SSDs, and tomorrow’s use cases are only destined to push them further. With NVMe now achieving price points that are comparable with SATA, there is no reason not to build future-proof storage today.

  • August 31, 2017

    Securing Embedded Storage with Hardware Encryption

    By Jeroen Dorgelo, Director of Strategy, Storage Group, Marvell

    For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software based encryption often works well for consumer and some enterprise environments, in the context of the embedded systems used in industrial and military applications, something that is of a simpler nature and is intrinsically more robust is usually going to be needed. 

    Self encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to the user and host operating system. By automatically encrypting data in the background, they thus provide the simple to use, resilient data security that is required by embedded systems. 

    Embedded vs Enterprise Data Security 

    Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved this is often related to the securing of customer (or possibly patient) privacy, military data or business data. However that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, thereby leading to distinctly different approaches to how data security is addressed. 

    Enterprise storage usually consists of racks of networked disk arrays in a data center, while embedded storage is often simply a solid state drive (SSD) installed into an embedded computer or device. The physical security of the data center can be controlled by the enterprise, and software access control to enterprise networks (or applications) is also usually implemented. Embedded devices, on the other hand - such as tablets, industrial computers, smartphones, or medical devices - are often used in the field, in what are comparatively unsecure environments. Data security in this context has no choice but to be implemented down at the device level. 

    Hardware Based Full Disk Encryption 

    For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. 

    Full disk, hardware based encryption has shown itself to be the best way of achieving this goal. Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured. 

    While FDE can be implemented in software, hardware-based FDE performs better and is inherently more secure. Hardware-based FDE is implemented at the drive level, in the form of a self-encrypting SSD. The SSD controller contains a hardware cryptographic engine and also stores the private keys on the drive itself.

    Because software-based FDE relies on the host processor to perform encryption, it is usually slower, whereas hardware-based FDE has much lower overhead because it can take advantage of the drive’s integrated crypto-processor. Hardware-based FDE can also encrypt the master boot record of the drive, which software-based encryption cannot.

    Hardware-centric FDE is transparent not only to the user but also to the host operating system. It works transparently in the background, and no special software is needed to run it. Besides maximizing ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, since all private keys are stored on the drive itself.
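
    To make the mechanics concrete, below is a minimal sketch of how a drive-level crypto engine can encrypt every sector transparently while keeping the key on the drive. It is written in Python with the third-party cryptography package purely for illustration; the class and method names are hypothetical and do not represent any Marvell or vendor API.

    ```python
    # Illustrative model of transparent sector encryption in a self-encrypting
    # drive. The host only calls read()/write(); the media key never leaves
    # the "drive". Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_SIZE = 512

    class SelfEncryptingDriveModel:
        def __init__(self) -> None:
            # 64-byte key = AES-256 in XTS mode, generated and held on the drive.
            self._media_key = os.urandom(64)
            self._media = {}   # physical sectors hold ciphertext only

        def _cipher(self, lba: int) -> Cipher:
            # The sector number acts as the XTS tweak, so identical plaintext
            # written to different sectors produces different ciphertext.
            tweak = lba.to_bytes(16, "little")
            return Cipher(algorithms.AES(self._media_key), modes.XTS(tweak))

        def write(self, lba: int, data: bytes) -> None:
            assert len(data) == SECTOR_SIZE
            enc = self._cipher(lba).encryptor()
            self._media[lba] = enc.update(data) + enc.finalize()

        def read(self, lba: int) -> bytes:
            dec = self._cipher(lba).decryptor()
            return dec.update(self._media[lba]) + dec.finalize()

    # From the host's point of view, this is just an ordinary block device.
    drive = SelfEncryptingDriveModel()
    drive.write(7, b"sensitive record".ljust(SECTOR_SIZE, b"\0"))
    assert drive.read(7).rstrip(b"\0") == b"sensitive record"
    ```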

    Improving Data Security 

    Besides providing the transparent, easy-to-use encryption that is now being sought, hardware-based FDE also has specific benefits for data security in modern SSDs. NAND cells have a finite service life, and modern SSDs use advanced wear-leveling algorithms to extend it as much as possible. Instead of overwriting NAND cells as data is updated, write operations are constantly moved around the drive, often leaving multiple copies of a piece of data spread across an SSD as a file is updated. This wear-leveling technique is extremely effective, but it makes file-based encryption and data erasure much harder to accomplish, as there are now multiple copies of data to encrypt or erase.

    FDE solves both the encryption and the erasure problem for SSDs. Since all data is encrypted, there is no concern about unencrypted data remnants. And because the encryption used (generally 256-bit AES) is extremely strong, erasing the drive is as simple as erasing the private keys.
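
    The sketch below, in the same illustrative spirit as the one above and again using hypothetical names, shows both halves of that argument: wear leveling leaves stale physical copies behind, and destroying the on-drive key ("crypto-erase") renders every copy, current or stale, unreadable at once.

    ```python
    # Illustrative only: a toy flash translation layer (FTL) with out-of-place
    # writes, as wear leveling produces, plus crypto-erase by key destruction.
    import os

    class ToyFTL:
        def __init__(self) -> None:
            self.media_key = os.urandom(32)   # held by the drive, never exported
            self.flash_pages = []             # NAND pages, written out of place
            self.logical_map = {}             # logical block -> current page index

        def write(self, lba: int, ciphertext: bytes) -> None:
            # Wear leveling: the update lands on a fresh page; the superseded
            # copy is not physically erased right away.
            self.flash_pages.append(ciphertext)
            self.logical_map[lba] = len(self.flash_pages) - 1

        def stale_copies(self) -> int:
            # Superseded pages still physically present on the NAND.
            return len(self.flash_pages) - len(self.logical_map)

        def crypto_erase(self) -> None:
            # With FDE, every page holds ciphertext, so destroying the key
            # sanitizes current and stale copies alike; no NAND scrub is needed.
            self.media_key = None

    ftl = ToyFTL()
    for version in range(3):
        ftl.write(0, f"logical block 0, version {version}".encode())
    print(ftl.stale_copies())   # -> 2 superseded copies still sit in NAND
    ftl.crypto_erase()          # one key wipe covers all of them
    ```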

    Solving Embedded Data Security 

    Embedded devices often present considerable security challenges to IT departments, as they are frequently used in uncontrolled environments, possibly by unauthorized personnel. Whereas enterprise IT has the authority to implement enterprise-wide data security policies and access control, it is usually much harder to apply those techniques to embedded devices installed in industrial environments or used out in the field.

    The simple solution for data security in embedded applications of this kind is hardware-based FDE. Self-encrypting drives with hardware crypto-processors have minimal processing overhead and operate completely in the background, transparently to both users and host operating systems. Their ease of use also translates into improved security, as administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or operating systems.

  • March 08, 2017

    NVMe-Based Network Fabrics Overcome the Limits of Legacy Rotational Media in the Data Center: The speed and cost advantages of shared NVMe SSD storage have advanced to a second generation.

    By Nick Ilyadis, VP of Portfolio Technology

    Marvell Debuts the 88SS1092 Second-Generation NVM Express SSD Controller at OCP Summit

    SSDs in the Data Center: NVMe and Where We’ve Been

    When solid-state drives (SSDs) were first introduced into the data center, the infrastructure mandated that they work within the confines of the then-current bus technologies, such as Serial ATA (SATA) and Serial Attached SCSI (SAS), developed for rotational media. Even the fastest hard disk drives (HDDs) were, of course, no match for SSDs, yet those legacy pipelines became the bottleneck that kept SSD technology from being exploited to its fullest. PCI Express (PCIe) offered an ideal high-bandwidth bus technology that was already deployed as the transport layer for networking, graphics and other add-in cards. PCIe became the next option of choice, but even this interface still relied on the older HDD-oriented SCSI or SATA protocols. The NVM Express (NVMe) industry working group was therefore formed to create a standardized protocol and command set for the PCIe bus, with the goal of enabling multiple paths that could take full advantage of SSDs within the data center. The NVMe specification was designed from the ground up to deliver high-bandwidth, low-latency storage for current and future NVM technologies.

    The NVMe interface provides an optimized command issue and completion path. It supports parallel operation, with up to 64K commands within a single I/O queue to the device. It also adds support for many enterprise capabilities, such as end-to-end data protection (compatible with the T10 DIF and DIX standards), enhanced error reporting and virtualization. In short, NVMe is a scalable host controller interface designed to address the needs of enterprise, data center and client systems that use PCIe-based solid-state drives, helping to maximize SSD performance.
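
    The queueing model is the heart of that parallelism. The sketch below is a deliberately simplified, hypothetical model of per-core submission/completion queue pairs, not the actual NVMe register or descriptor layout, meant only to show why many deep, independent queues let multiple CPU cores drive an SSD without contending on a single command path.

    ```python
    # Simplified conceptual model of NVMe queue pairs (not the real data layout).
    from collections import deque
    from dataclasses import dataclass, field

    MAX_QUEUE_ENTRIES = 65536   # NVMe allows up to 64K commands per I/O queue

    @dataclass
    class QueuePair:
        qid: int
        submission: deque = field(default_factory=deque)
        completion: deque = field(default_factory=deque)

        def submit(self, command: dict) -> None:
            if len(self.submission) >= MAX_QUEUE_ENTRIES:
                raise RuntimeError("submission queue full")
            self.submission.append(command)

        def process_one(self) -> None:
            # The controller fetches a command and posts a completion entry.
            cmd = self.submission.popleft()
            self.completion.append({"cid": cmd["cid"], "status": 0})

    # A common arrangement is one queue pair per CPU core.
    queues = [QueuePair(qid=i) for i in range(8)]
    queues[3].submit({"cid": 1, "opcode": "read", "lba": 0, "nlb": 8})
    queues[3].process_one()
    print(queues[3].completion[-1])   # {'cid': 1, 'status': 0}
    ```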

    SSD Network Fabrics 

    New NVMe controllers from companies like Marvell allowed the data center to share storage data to further maximize cost and performance efficiencies. By creating SSD network fabrics that link drives into SSD clusters, the storage in individual servers can be pooled to maximize data center storage. In addition, a common enclosure can be created for additional servers, so data can be moved and data access shared. With these new compute models, data centers can not only fully exploit the high performance of SSDs, but also deploy them more economically, lowering overall cost and streamlining maintenance. Instead of adding SSDs to individual servers, under-utilized SSDs can be redeployed to serve servers with heavier allocations.

    A simple example shows what these network fabrics make possible. Take a system of seven servers, each with an SSD on its PCIe bus. Forming those drives into an SSD cluster not only provides a means of adding storage, it also provides a way to pool and share data access. If, say, one server is only 10 percent utilized while another is over-allocated, deploying an SSD cluster lets storage be added to the over-allocated server without adding SSDs to any individual server. Scale this example to hundreds of servers and the cost, maintenance and performance efficiencies grow dramatically.
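
    For illustration only, here is the back-of-the-envelope arithmetic behind that example, with made-up utilization numbers:

    ```python
    # Hypothetical numbers: seven servers, one 4TB SSD each, uneven usage.
    per_server_ssd_tb = 4
    used_tb = [0.4, 3.9, 1.0, 3.8, 2.0, 0.5, 3.7]

    total_capacity = per_server_ssd_tb * len(used_tb)   # 28 TB in the cluster
    total_used = sum(used_tb)                           # 15.3 TB actually used

    print(f"Pooled utilization: {total_used / total_capacity:.0%}")   # ~55%
    # Without pooling, the three servers that are over 90% full would each need
    # another SSD, even though the cluster as a whole is only about half used.
    ```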

    Marvell helped pave the way for these new compute models with its first NVMe SSD controller. That product supported up to four PCIe 3.0 lanes and was well suited to either 4GB/s or 2GB/s end points, depending on the host system configuration. Using NVMe's advanced command handling, it delivered outstanding IOPS performance. To make the most of the high-speed PCIe connection, Marvell's innovative NVMe design made extensive use of hardware automation to streamline the PCIe link data flow. This eased the traditional host-control bottlenecks and unlocked true flash performance.

    Second-Generation NVMe Controllers are Here! 

    That first product has now been followed by the Marvell 88SS1092 second-generation NVMe SSD controller, which has completed internal SSD validation and third-party OS and platform compatibility testing. The Marvell® 88SS1092 will therefore be introduced at the Open Compute Project (OCP) Summit, held March 8 and 9 in San Jose, California, as a product built to power the next generation of storage and data center systems.

    The Marvell 88SS1092 is Marvell's second-generation NVMe SSD controller, capable of PCIe 3.0 x4 end points to provide a full 4GB/s interface to the host and help remove performance bottlenecks. While the new controller advances solid-state storage to a more fully flash-optimized architecture for greater performance, it also includes Marvell's third-generation error-correcting low-density parity check (LDPC) technology for additional reliability, an endurance boost and TLC NAND device support on top of MLC NAND.
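
    As a quick sanity check on the 4GB/s figure, the usable bandwidth of a PCIe 3.0 x4 link can be estimated from the per-lane rate and the 128b/130b line encoding (this back-of-the-envelope figure ignores packet and protocol overhead):

    ```python
    # Rough PCIe 3.0 x4 bandwidth estimate.
    lanes = 4
    lane_rate_gt_s = 8                 # PCIe 3.0: 8 GT/s per lane
    encoding_efficiency = 128 / 130    # 128b/130b line encoding

    usable_gb_s = lanes * lane_rate_gt_s * encoding_efficiency / 8  # bits -> bytes
    print(f"{usable_gb_s:.2f} GB/s")   # ~3.94 GB/s, i.e. roughly 4GB/s
    ```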

    The speed and cost advantages of shared NVMe SSD storage are not only proven, they have entered a second generation. A paradigm shift in the network is under way. Using the NVMe protocol, designed from the ground up to extract maximum SSD performance, new compute models are being built without the limitations of legacy rotational media. SSD clusters and new network fabrics deliver pooled storage and shared data access while maximizing SSD performance. The working group's efforts have made today's data centers a reality, and new controllers and technologies continue to optimize the performance and cost efficiency of SSD technology.

    Marvell 88SS1092 Second-Generation NVMe SSD Controller

    New process and advanced NAND controller design includes:

    [Chart: 88SS1092 feature summary]

  • January 17, 2017

    Marvell Honored with 2016 Analysts’ Choice Award by The Linley Group for its Storage Processor

    By Marvell PR Team

    We pride ourselves on delivering innovative solutions to help our global customers store, move and access data—fast, securely, reliably and efficiently. Underscoring our commitment to innovation, we were named one of the Top 100 Global Innovators by Clarivate Analytics for the fifth consecutive year. In further recognition of our world-class technology, we are excited to share that The Linley Group, one of the most prominent semiconductor analyst firms, has selected Marvell’s ARMADA® SP (storage processor) as the Best Embedded Processor in its 2016 Analysts' Choice Awards. 

    Honoring the best and the brightest in semiconductor technology, the Analysts' Choice Awards recognize the solutions that deliver superior power, performance, features and pricing for their respective end applications and markets. The Linley Group awarded this prestigious accolade to Marvell for its ARMADA SP and recognized the solution’s high level of integration, high performance and low-power operation. 

    Marvell’s ARMADA SP is optimized for the rapid development of high-efficiency and high-density storage solutions for the enterprise and data center markets. With a highly integrated, scalable and flexible architecture, the ARMADA SP incorporates state-of-the-art interfaces and acceleration engines for advanced data processing capabilities, and to support TCO-minded hyperscale environments. 

    To learn more about Marvell’s SP solution, visit: http://www.marvell.com/storage/armada-sp/.

  • October 10, 2014

    Error Detection and Correction in Solid-State Drives

    By Engling Yeo, Director, Embedded Low-Power Flash Controllers

    Lowering the Cost of Improved Reliability?

    Like most technology innovations, solid-state drives (SSDs) started out high in performance and high in price. Data centers saw their value, and as the technology advanced and OEMs recognized the potential of thin, light form factors (new products such as the Apple MacBook Air appeared), SSDs became a mainstream consumer technology. And once a technology goes mainstream, it becomes price sensitive. End users shy away from any talk of error detection and correction (ECC) mechanisms and care mostly about price, yet they are the first to complain when a low-cost SSD loses their data. So it falls to us engineers to care about mechanisms such as ECC, and happily, we enjoy the topic.

    So let's begin the discussion. As noted above, the consumer markets that use embedded storage built on solid-state or NAND flash devices are especially cost sensitive. Much of what we do can be broadly described as "signal processing" to mitigate issues that affect the profitability of consumer storage products. The basic building block of a solid-state storage product is the floating-gate transistor cell. A floating gate can store different levels of electrical charge, and these levels correspond to one or more stored binary bits. NAND flash manufacturers generally take two approaches to increasing storage density: 1) pack the floating-gate devices as physically close together as possible, and 2) store as many bits as possible in each storage element (current state-of-the-art technology stores three bits per floating-gate transistor). Both approaches, however, tend to increase the rate of bit errors when the data is read back. Marvell has taken on the challenge of creating advanced ECC technology that, implemented in an efficient hardware architecture, delivers the same data integrity on high-density NAND flash that would otherwise suffer a higher raw bit error rate.
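
    To see why density works against raw reliability, consider how the number of charge levels, and therefore the margin between adjacent levels, scales with the number of bits per cell. The short calculation below is illustrative only, using a normalized voltage window:

    ```python
    # More bits per floating-gate cell means exponentially more charge levels
    # squeezed into roughly the same sensing window, so the margin between
    # adjacent levels shrinks and raw bit errors become more likely.
    voltage_window = 1.0   # normalized, arbitrary units

    for bits_per_cell, name in [(1, "SLC"), (2, "MLC"), (3, "TLC")]:
        levels = 2 ** bits_per_cell
        margin = voltage_window / (levels - 1)
        print(f"{name}: {levels} levels, relative margin {margin:.2f}")
    # SLC: 2 levels, 1.00   MLC: 4 levels, 0.33   TLC: 8 levels, 0.14
    ```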

    Adding to the complexity, each floating-gate transistor can endure only a limited number of program/erase (P/E) cycles before the probability of error rises above the threshold at which the transistor becomes useless and cannot be recovered. This limit stems from the erase procedure, which exposes the device to very high voltages and physically degrades the transistor. As the number of P/E cycles grows, so does the probability of error. Choosing an appropriate error detection and correction scheme mitigates these effects and extends the life of the device.

    Marvell is now in the development cycle of its third generation of low-density parity check (LDPC) technology for solid-state storage applications. Our goal is to provide effective ECC management and strategies that let our customers lower their cost per unit of storage without sacrificing reliability. That is the message we want to share!

Archives