How the Landscape of Memory Is Evolving With CXL


As datasets grow from megabytes to terabytes to petabytes, the cost of moving data from block storage devices across interconnects into system memory, performing computation, and then storing the large dataset back to persistent storage is growing in terms of time and energy (watts). Moreover, heterogeneous computing hardware increasingly needs access to the same datasets. For example, a general-purpose CPU may be used for assembling and preprocessing a dataset and scheduling tasks, but a specialized compute engine (like a GPU) is far faster at training an AI model. A more efficient solution is needed, one that reduces the transfer of large datasets from storage directly to processor-accessible memory. Several organizations have pushed the industry toward solutions to these problems by keeping datasets in large, byte-addressable, sharable memory. In the 1990s, the Scalable Coherent Interface (SCI) allowed multiple CPUs to access memory coherently within a system. The Heterogeneous System Architecture (HSA)1 specification allowed memory sharing between devices of different types on the same bus.



In the decade beginning in 2010, the Gen-Z standard delivered a memory-semantic bus protocol with high bandwidth, low latency, and coherency. These efforts culminated in the widely adopted Compute Express Link (CXL™) standard in use today. Since the formation of the Compute Express Link (CXL) consortium, Micron has been and remains an active contributor. Compute Express Link opens the door to saving time and energy. The new CXL 3.1 standard allows byte-addressable, load-store-accessible memory like DRAM to be shared between different hosts over a low-latency, high-bandwidth interface built from industry-standard components. This sharing opens doors previously reachable only with expensive, proprietary equipment. With shared memory systems, data can be loaded into shared memory once and then processed multiple times by multiple hosts and accelerators in a pipeline, without incurring the costs of copying the data to local memory, of block storage protocols, or of the associated latency. Furthermore, some network data transfers can be eliminated entirely.
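To make the load-store model concrete, here is a minimal C sketch of how a host might map such a shared region and touch it with ordinary loads and stores. It assumes Linux surfaces the CXL memory as a DAX character device; the /dev/dax0.0 path and the 1 GiB size are illustrative assumptions, not anything the CXL specification mandates.

/*
 * Minimal sketch: map a shared CXL memory region and access it with
 * plain loads and stores -- no read()/write() block I/O involved.
 * The device path and region size are assumptions about the platform.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (1UL << 30)   /* assume a 1 GiB shared region */

int main(void)
{
    int fd = open("/dev/dax0.0", O_RDWR);   /* hypothetical DAX device */
    if (fd < 0) { perror("open"); return 1; }

    uint8_t *mem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* A plain store lands in CXL-attached memory... */
    strcpy((char *)mem, "dataset header");

    /* ...and a plain load reads it back -- from this host or another. */
    printf("%s\n", (char *)mem);

    munmap(mem, REGION_SIZE);
    close(fd);
    return 0;
}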


For example, data can be ingested and stored into shared memory over time by a host connected to a sensor array. Once the data is resident in memory, a second host optimized for the purpose can clean and preprocess it, followed by a third host that processes it. Meanwhile, the first host has been ingesting a second dataset. The only information that needs to pass between the hosts is a message pointing to the data to indicate that it is ready for processing. The large dataset never has to move or be copied, saving bandwidth, energy, and memory space. Another example of zero-copy data sharing is a producer-consumer data model, where a single host is responsible for collecting data in memory and multiple other hosts consume the data after it is written. As before, the producer simply sends a message pointing to the address of the data, signaling the other hosts that it is ready for consumption. A sketch of such a handoff follows.
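Here is a minimal C sketch of that handoff. It assumes both hosts map the shared region with an agreed layout and that the platform keeps the region hardware-coherent, as CXL 3.1 permits; the struct and function names are invented for illustration.

/*
 * Producer-consumer handoff sketch. A small descriptor at a known
 * place in the shared region carries the "message": the offset and
 * length of a dataset plus a ready flag. Assumes hardware-coherent
 * shared memory; layout and names are illustrative only.
 */
#include <stdatomic.h>
#include <stdint.h>

struct handoff {
    uint64_t    offset;   /* where the dataset starts in the region */
    uint64_t    length;   /* dataset size in bytes                  */
    atomic_uint ready;    /* 0 = not ready, 1 = ready to consume    */
};

/* Producer: publish a dataset it has already written at `off`. */
static void publish(struct handoff *h, uint64_t off, uint64_t len)
{
    h->offset = off;
    h->length = len;
    /* Release ordering: the offset/length stores (and the dataset
     * writes before them) become visible before the flag flips. */
    atomic_store_explicit(&h->ready, 1, memory_order_release);
}

/* Consumer: wait until the dataset is published, then return its
 * address. Real code would block or use an interrupt, not spin. */
static const uint8_t *consume(struct handoff *h, const uint8_t *base,
                              uint64_t *len)
{
    while (atomic_load_explicit(&h->ready, memory_order_acquire) == 0)
        ;  /* poll the flag */
    *len = h->length;
    return base + h->offset;
}

Note that the dataset itself is never copied; only the small descriptor and its flag pass between hosts.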



Zero-copy data sharing can be further enhanced by CXL memory modules with built-in processing capabilities. For example, if a CXL memory module can perform a repetitive mathematical operation or data transformation on a data object entirely within the module, system bandwidth and power can be saved. These savings are achieved by commanding the memory module to execute the operation without the data ever leaving the module, a capability called near memory compute (NMC). Additionally, the low-latency CXL fabric can be leveraged to send messages with very little overhead from one host to another, between hosts and memory modules, or between memory modules. These connections can be used to synchronize steps and share pointers between producers and consumers. Beyond NMC and communication benefits, advanced memory telemetry can be added to CXL modules to provide a new window into real-world application traffic in the shared devices2 without burdening the host processors.
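No standard NMC command set exists today, so the C sketch below is purely hypothetical: the opcodes, the descriptor layout, and the nmc_submit() entry point are all invented to show the shape of the idea, namely that the host sends a small command and the data object never crosses the link.

/*
 * Near memory compute (NMC) sketch. Everything here is hypothetical:
 * CXL does not define an NMC command set, so the opcodes, descriptor,
 * and nmc_submit() are invented for illustration only.
 */
#include <stdint.h>

enum nmc_opcode {
    NMC_OP_FILL  = 1,   /* fill a range with a constant           */
    NMC_OP_SUM64 = 2,   /* reduce a range of u64s to one value    */
    NMC_OP_XFORM = 3,   /* apply a module-resident transformation */
};

struct nmc_cmd {
    uint32_t opcode;    /* one of enum nmc_opcode                  */
    uint32_t flags;
    uint64_t src_off;   /* operand range, as offsets in the module */
    uint64_t len;
    uint64_t dst_off;   /* result location, also inside the module */
};

/* Stub: a real implementation would write the descriptor into the
 * module's command mailbox and wait on a completion doorbell. */
int nmc_submit(int module_fd, const struct nmc_cmd *cmd)
{
    (void)module_fd;
    (void)cmd;
    return 0;
}

The payoff is in the traffic: summing a 1 GiB array this way would put one small descriptor and one 8-byte result on the CXL link, instead of pulling 1 GiB through the host's caches.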



With the insights gained, operating systems and management software can optimize data placement (memory tiering) and tune other system parameters to meet operating goals, from performance to power consumption. Additional memory-intensive, value-add functions such as transactions are also well suited to NMC. Micron is excited to integrate large, scale-out CXL global shared memory and enhanced memory features into our memory lake concept.
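As one hedged illustration of telemetry-driven tiering: Linux commonly exposes CXL-attached memory as a CPU-less NUMA node, so management software can demote a cold page to that tier with the stock move_pages(2) call from libnuma. The node number, and the choice of which page is cold, are assumptions here; in practice the telemetry would supply them.

/*
 * Memory tiering sketch: demote one cold page to a CXL-attached tier.
 * Assumes the CXL memory appears as NUMA node 1 on this machine.
 * Build with -lnuma. Telemetry, not a constant, would pick the page.
 */
#define _GNU_SOURCE
#include <numaif.h>    /* move_pages, MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CXL_NODE 1     /* assumed NUMA node backed by CXL memory */

int main(void)
{
    long page_sz = sysconf(_SC_PAGESIZE);

    /* Stand-in for application data that telemetry flagged as cold. */
    void *cold = aligned_alloc(page_sz, page_sz);
    memset(cold, 0, page_sz);   /* touch it so the page exists */

    void *pages[1]  = { cold };
    int   nodes[1]  = { CXL_NODE };
    int   status[1] = { -1 };

    /* Ask the kernel to migrate the page to the CXL tier. */
    if (move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE))
        perror("move_pages");
    else
        printf("page now on node %d\n", status[0]);

    free(cold);
    return 0;
}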
