We are pioneering the future of data infrastructure.

Posts Tagged 'Edge Computing'

  • October 17, 2018

    Marvell Demonstrates Edge Computing with AWS Greengrass at Arm TechCon 2018

    By Maen Suleiman, Senior Software Product Line Manager, Marvell, and Gorka Garcia, Senior Lead Engineer, Marvell Semiconductor, Inc.

    Thanks to the respective merits of its ARMADA® and OCTEON TX® multi-core processor offerings, Marvell is in a prime position to address a broad spectrum of demanding applications situated at the edge of the network. These applications serve a multitude of markets, including small business, industrial and enterprise, and require specialized technologies such as efficient packet processing, machine learning and connectivity to the cloud. As part of its collaboration with Amazon Web Services® (AWS), Marvell will illustrate the capabilities of edge computing applications through an exciting new demo shown to attendees at Arm TechCon, which is being held at the San Jose Convention Center, October 16th-18th.

    This demo takes the form of an automated parking lot. An ARMADA processor-based Marvell MACCHIATObin® community board, which integrates the AWS Greengrass® software, serves as an edge compute node. The Marvell edge compute node receives video streams from two cameras placed at the entry and exit gates of the parking lot. The ARMADA processor-based compute node runs AWS Greengrass Core; executes two Lambda functions to process the incoming video streams and identify vehicles entering the garage by their license plates; and subsequently checks whether those vehicles are authorized to enter the parking lot.
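    In a Greengrass setup like this, the plate-recognition Lambda typically packages each recognized plate as a small JSON message and publishes it over MQTT to the cloud. The sketch below shows one plausible payload format; the topic name, field names and the `Entry`/`Exit` gate IDs are assumptions for illustration, not details taken from the demo.

```python
import json

# Hypothetical MQTT topic -- the article does not name the real one.
PLATE_TOPIC = "parking/plates"

def build_plate_message(plate: str, gate_id: str) -> str:
    """Package a recognized license plate and its gate ID (Entry/Exit)
    as the JSON payload an edge Lambda might forward to the cloud."""
    if gate_id not in ("Entry", "Exit"):
        raise ValueError("gate_id must be 'Entry' or 'Exit'")
    # Normalize the plate so the cloud-side whitelist lookup is case-insensitive.
    return json.dumps({"plate": plate.upper(), "gate": gate_id})

# In the actual Greengrass Lambda, this payload would be published with the
# Greengrass SDK's IoT data client, e.g.:
#   greengrasssdk.client("iot-data").publish(topic=PLATE_TOPIC, payload=...)
```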

    The first Lambda function runs Automatic License Plate Recognition (OpenALPR) software; it obtains the license plate number and delivers it, together with the gate ID (Entry/Exit), to a Lambda function running in the AWS® cloud that accesses a DynamoDB® database. The cloud Lambda function is responsible for reading the DynamoDB whitelist database and determining whether the license plate belongs to an authorized car. This information is sent back to a second Lambda function at the edge of the network, on the MACCHIATObin board, which is responsible for managing the parking lot capacity and opening or closing the gate. This Lambda function logs activity at the edge to the AWS Elasticsearch® service, which serves as a backend for Kibana®, an open source data visualization engine. Kibana enables a remote operator to have direct access to information on parking lot occupancy, entry gate status and exit gate status. Furthermore, the AWS Cognito service authenticates users for access to Kibana.
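    The cloud-side decision reduces to a whitelist membership test. The minimal sketch below captures that logic as a pure function; the table name, field names and verdict strings are assumptions, and in real code the lookup would go through boto3's DynamoDB client rather than an in-memory set.

```python
def check_plate(plate: str, whitelist: set) -> dict:
    """Return the verdict the cloud Lambda sends back to the edge.

    `whitelist` stands in for the DynamoDB whitelist table; verdict
    strings ("allowed"/"denied") are illustrative, not from the demo.
    """
    normalized = plate.upper()
    allowed = normalized in whitelist
    return {"plate": normalized, "verdict": "allowed" if allowed else "denied"}

# With boto3's low-level client, the membership test would look roughly like:
#   resp = dynamodb.get_item(TableName="Whitelist",
#                            Key={"plate": {"S": normalized}})
#   allowed = "Item" in resp
```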

    After the AWS cloud Lambda function sends its verdict (allowed/denied) to the second Lambda function running on the MACCHIATObin board, that Lambda function is responsible for communicating with the gate controller, built around a Marvell ESPRESSObin® board, which opens or closes the gate as required.

    The ESPRESSObin board runs as an AWS Greengrass IoT device that will be responsible for opening the gate according to the information received from the MACCHIATObin board’s second Lambda function. 
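    The second edge Lambda's bookkeeping, managing lot capacity and deciding whether the gate should open, can be sketched as a small state machine. The capacity value and the rule that exits always open the gate are assumptions made for illustration; the article does not describe the demo's exact policy.

```python
class ParkingLot:
    """Illustrative occupancy tracker for the second edge Lambda."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.occupied = 0

    def handle(self, verdict: str, gate: str) -> bool:
        """Return True if the gate should open for this event."""
        if gate == "Exit":
            self.occupied = max(0, self.occupied - 1)
            return True                      # assumption: always let cars out
        if verdict == "allowed" and self.occupied < self.capacity:
            self.occupied += 1
            return True                      # authorized and space available
        return False                         # denied, or the lot is full
```

    In the demo, the boolean decision would be forwarded as an MQTT message to the ESPRESSObin gate controller, and each event would also be logged to Elasticsearch for the Kibana dashboard.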

    This demo showcases the ability to run a machine learning algorithm using AWS Lambda at the edge, making the identification process extremely fast. This is possible through the high-performance, low-power Marvell OCTEON TX and ARMADA multi-core processors. The capabilities of Marvell infrastructure processors have the potential to cover a range of higher-end networking and security applications that can benefit from the maturity of the Arm® ecosystem and the ability to run machine learning in a multi-core environment at the edge of the network.

    Those visiting the Arm Infrastructure Pavilion (Booth# 216) at Arm TechCon (San Jose Convention Center, October 16th-18th) will be able to see the Marvell Edge Computing demo powered by AWS Greengrass. 

    For information on how to enable AWS Greengrass on Marvell MACCHIATObin and Marvell ESPRESSObin community boards, please visit http://wiki.macchiatobin.net/tiki-index.php?page=AWS+Greengrass+on+MACCHIATObin and http://wiki.espressobin.net/tiki-index.php?page=AWS+Greengrass+on+ESPRESSObin.    

  • January 10, 2018

    Marvell Demonstrated Edge Computing at CES 2018, Using the Pixeom Edge Platform to Extend Google Cloud to the Network Edge

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    With the adoption of multi-gigabit networks and plans for deploying next-generation 5G networks, available network bandwidth will continue to grow as more computing and storage services migrate to the cloud. Applications running on network-connected IoT and mobile devices are becoming increasingly intelligent and computationally demanding. With so many resources flowing to the cloud, however, today's networks are under strain.

    Next-generation architectures will need to distribute intelligence across the network infrastructure rather than relying on the traditional cloud-centric model. High-performance computing hardware, along with the associated software, needs to be deployed at the edge of the network. A distributed operating model must provide edge devices with the compute and security capabilities they require, enable compelling real-time services, and overcome the latency problems inherent in applications such as automotive, virtual reality and industrial computing. These applications also call for the analysis of high-resolution video and audio content.

    Through use of its high-performance ARMADA® embedded processors, Marvell is able to demonstrate a highly effective solution that facilitates edge computing on the Marvell MACCHIATObin™ community board using the ARMADA 8040 system-on-chip (SoC). At CES® 2018, the Marvell and Pixeom teams will demonstrate a fully capable yet cost-effective edge computing system using the Marvell MACCHIATObin community board in conjunction with the Pixeom Edge Platform to extend the functionality of Google Cloud Platform™ services to the edge of the network. The Marvell MACCHIATObin community board will run Pixeom Edge Platform software, which extends cloud capabilities by orchestrating and running Docker container-based micro-services on the board.

    Today, sending data-heavy, high-resolution video content to the cloud for analysis has proven to place a major burden on network infrastructure, and to be both resource-intensive and costly. Building on Marvell MACCHIATObin hardware, Pixeom will demonstrate a container-based edge computing solution that delivers video analytics capabilities at the network edge. This unique combination of hardware and software provides a highly optimized, straightforward way to place more processing and storage resources at the edge of the network. The technology can significantly raise operational efficiency and reduce latency.

    The Marvell and Pixeom demonstration deploys Google TensorFlow™ micro-services at the network edge to enable a variety of key functions, including object detection, facial recognition, text reading (for name badges, license plates, etc.) and intelligent notifications (for security/safety alerts). This technology encompasses the full scope of potential applications, covering everything from video surveillance and autonomous vehicles right through to smart retail and artificial intelligence.

    Pixeom offers a complete edge computing solution, enabling cloud service providers to package, deploy and orchestrate containerized applications at scale, running on-premise "Edge IoT Cores." To accelerate development, Cores come with built-in machine learning, FaaS, data processing, messaging, API management, analytics, offloading capabilities to Google Cloud, and more.

    The MACCHIATObin community board uses Marvell's ARMADA 8040 processor, featuring a 64-bit ARMv8 quad-core processor (running at up to 2.0GHz), and supports up to 16GB of DDR4 memory and a wide array of I/Os. Through use of Linux® on the Marvell MACCHIATObin board, the multifaceted Pixeom Edge IoT platform can facilitate the implementation of edge computing servers (or cloudlets) at the periphery of the cloud network. As part of Pixeom's demo, Marvell will show the power of this popular hardware platform to run advanced machine learning, data processing and IoT functions. The role-based access features of the Pixeom Edge IoT platform also mean that developers in different locations can collaborate to create compelling edge computing implementations. Pixeom supplies all the edge computing support needed to allow users of Marvell embedded processors to establish their own edge-based applications, thus offloading operations from the center of the network.
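    Edge vision micro-services like these typically post-process raw detector output before raising alerts, keeping only confident detections and flagging the ones that warrant an intelligent notification. The sketch below shows that triage step; the 0.5 confidence threshold and the alert label set are assumptions for illustration, not values from the Pixeom demo.

```python
# Labels that should trigger a security/safety notification -- hypothetical.
ALERT_LABELS = {"person", "vehicle"}

def triage(detections, threshold=0.5):
    """Filter object-detector output and pick out alert-worthy labels.

    detections: list of (label, score) pairs, e.g. from a TensorFlow
    object-detection model running in an edge micro-service.
    Returns (confident_detections, alert_labels).
    """
    confident = [(label, score) for label, score in detections
                 if score >= threshold]
    alerts = [label for label, _ in confident if label in ALERT_LABELS]
    return confident, alerts
```

    In a deployed Core, the alert list would feed the notification service, while the confident detections could be batched and offloaded to Google Cloud for aggregate analytics.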
    Marvell will also demonstrate the compatibility of its technology with the Google Cloud platform, which enables the management and analysis of deployed edge computing resources at scale. Here, once again, the MACCHIATObin board provides the hardware foundation engineers need, supplying all the required processing, memory and connectivity.

    Those visiting Marvell’s suite at CES (Venetian, Level 3 - Murano 3304, 9th-12th January 2018, Las Vegas) will be able to see a series of different demonstrations of the MACCHIATObin community board running cloud workloads at the network edge. Make sure you come by!

Archives