
Whitepaper

NVIDIA’s Next Generation CUDA™ Compute Architecture: Kepler™ GK110

The Fastest, Most Efficient HPC Architecture Ever Built

V1.0

Table of Contents

Kepler GK110 – The Next Generation GPU Computing Architecture
Kepler GK110 – Extreme Performance, Extreme Efficiency
• Dynamic Parallelism
• Hyper‐Q
• Grid Management Unit
• NVIDIA GPUDirect™
An Overview of the GK110 Kepler Architecture
Performance per Watt
Streaming Multiprocessor (SMX) Architecture
SMX Processing Core Architecture
Quad Warp Scheduler
New ISA Encoding: 255 Registers per Thread
Shuffle Instruction
Atomic Operations
Texture Improvements
Kepler Memory Subsystem – L1, L2, ECC
64 KB Configurable Shared Memory and L1 Cache
48KB Read‐Only Data Cache
Improved L2 Cache
Memory Protection Support
Dynamic Parallelism
Hyper‐Q
Grid Management Unit – Efficiently Keeping the GPU Utilized
NVIDIA GPUDirect™
Conclusion
Appendix A – Quick Refresher on CUDA
CUDA Hardware Execution

Kepler GK110 – The Next Generation GPU Computing Architecture

As the demand for high performance parallel computing increases across many areas of science, medicine, engineering, and finance, NVIDIA continues to innovate and meet that demand with extraordinarily powerful GPU computing architectures. NVIDIA’s existing Fermi GPUs have already redefined and accelerated High Performance Computing (HPC) capabilities in areas such as seismic processing, biochemistry simulations, weather and climate modeling, signal processing, computational finance, computer aided engineering, computational fluid dynamics, and data analysis. NVIDIA’s new Kepler GK110 GPU raises the parallel computing bar considerably and will help solve the world’s most difficult computing problems.

By offering much higher processing power than the prior GPU generation and by providing new methods to optimize and increase parallel workload execution on the GPU, Kepler GK110 simplifies creation of parallel programs and will further revolutionize high performance computing.

Kepler GK110 ‐ Extreme Performance, Extreme Efficiency

Comprising 7.1 billion transistors, Kepler GK110 is not only the fastest, but also the most architecturally complex microprocessor ever built. Adding many new innovative features focused on compute performance, GK110 was designed to be a parallel processing powerhouse for Tesla? and the HPC market.

Kepler GK110 will provide over 1 TFlop of double precision throughput with greater than 80% DGEMM efficiency versus 60‐65% on the prior Fermi architecture.

In addition to greatly improved performance, the Kepler architecture offers a huge leap forward in power efficiency, delivering up to 3x the performance per watt of Fermi.

Kepler GK110 Die Photo

The following new features in Kepler GK110 enable increased GPU utilization, simplify parallel program design, and aid in the deployment of GPUs across the spectrum of compute environments ranging from personal workstations to supercomputers:

• Dynamic Parallelism – adds the capability for the GPU to generate new work for itself, synchronize on results, and control the scheduling of that work via dedicated, accelerated hardware paths, all without involving the CPU. By providing the flexibility to adapt to the amount and form of parallelism through the course of a program's execution, programmers can expose more varied kinds of parallel work and make the most efficient use of the GPU as a computation evolves. This capability allows less‐structured, more complex tasks to run easily and effectively, enabling larger portions of an application to run entirely on the GPU. In addition, programs are easier to create, and the CPU is freed for other tasks.

• Hyper‐Q – Hyper‐Q enables multiple CPU cores to launch work on a single GPU simultaneously, thereby dramatically increasing GPU utilization and significantly reducing CPU idle times. Hyper‐Q increases the total number of connections (work queues) between the host and the GK110 GPU by allowing 32 simultaneous, hardware‐managed connections (compared to the single connection available with Fermi). Hyper‐Q is a flexible solution that allows separate connections from multiple CUDA streams, from multiple Message Passing Interface (MPI) processes, or even from multiple threads within a process. Applications that previously encountered false serialization across tasks, thereby limiting achieved GPU utilization, can see up to a 32x performance increase without changing any existing code.

• Grid Management Unit – Enabling Dynamic Parallelism requires an advanced, flexible grid management and dispatch control system. The new GK110 Grid Management Unit (GMU) manages and prioritizes grids to be executed on the GPU. The GMU can pause the dispatch of new grids and queue pending and suspended grids until they are ready to execute, providing the flexibility to enable powerful runtimes, such as Dynamic Parallelism. The GMU ensures both CPU‐ and GPU‐generated workloads are properly managed and dispatched.

• NVIDIA GPUDirect™ – NVIDIA GPUDirect™ is a capability that enables GPUs within a single computer, or GPUs in different servers located across a network, to directly exchange data without needing to go to CPU/system memory. The RDMA feature in GPUDirect allows third‐party devices such as SSDs, NICs, and IB adapters to directly access memory on multiple GPUs within the same system, significantly decreasing the latency of MPI send and receive messages to/from GPU memory. It also reduces demands on system memory bandwidth and frees the GPU DMA engines for use by other CUDA tasks. Kepler GK110 also supports other GPUDirect features including Peer‐to‐Peer and GPUDirect for Video.

An Overview of the GK110 Kepler Architecture

Kepler GK110 was built first and foremost for Tesla, and its goal was to be the highest performing parallel computing microprocessor in the world. GK110 not only greatly exceeds the raw compute horsepower delivered by Fermi, but it does so efficiently, consuming significantly less power and generating much less heat output.

A full Kepler GK110 implementation includes 15 SMX units and six 64‐bit memory controllers. Different products will use different configurations of GK110. For example, some products may deploy 13 or 14 SMXs.

Key features of the architecture that will be discussed below in more depth include:

• The new SMX processor architecture
• An enhanced memory subsystem, offering additional caching capabilities, more bandwidth at each level of the hierarchy, and a fully redesigned and substantially faster DRAM I/O implementation
• Hardware support throughout the design to enable new programming model capabilities

Kepler GK110 full-chip block diagram

Kepler GK110 supports the new CUDA Compute Capability 3.5. (For a brief overview of CUDA, see Appendix A – Quick Refresher on CUDA.) The following table compares parameters of different Compute Capabilities for Fermi and Kepler GPU architectures:

                                         FERMI GF100   FERMI GF104   KEPLER GK104   KEPLER GK110
Compute Capability                       2.0           2.1           3.0            3.5
Threads / Warp                           32            32            32             32
Max Warps / Multiprocessor               48            48            64             64
Max Threads / Multiprocessor             1536          1536          2048           2048
Max Thread Blocks / Multiprocessor       8             8             16             16
32-bit Registers / Multiprocessor        32768         32768         65536          65536
Max Registers / Thread                   63            63            63             255
Max Threads / Thread Block               1024          1024          1024           1024
Shared Memory Size Configurations        16K/48K       16K/48K       16K/32K/48K    16K/32K/48K
Max X Grid Dimension                     2^16-1        2^16-1        2^32-1         2^32-1
Hyper-Q                                  No            No            No             Yes
Dynamic Parallelism                      No            No            No             Yes

Compute Capability of Fermi and Kepler GPUs
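These per‐device limits can also be queried at run time through the CUDA runtime API. A minimal sketch (not part of the original whitepaper; fields are from the standard cudaDeviceProp structure):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            // Compute capability 3.5 identifies GK110-class parts.
            printf("Device %d: %s (compute capability %d.%d)\n",
                   dev, prop.name, prop.major, prop.minor);
            printf("  Warp size:                    %d\n", prop.warpSize);
            printf("  Max threads / multiprocessor: %d\n", prop.maxThreadsPerMultiProcessor);
            printf("  Max threads / block:          %d\n", prop.maxThreadsPerBlock);
            printf("  Max registers / block:        %d\n", prop.regsPerBlock);
            printf("  Max X grid dimension:         %d\n", prop.maxGridSize[0]);
        }
        return 0;
    }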

Performance per Watt

A principal design goal for the Kepler architecture was improving power efficiency. When designing Kepler, NVIDIA engineers applied everything learned from Fermi to better optimize the Kepler architecture for highly efficient operation. TSMC’s 28nm manufacturing process plays an important role in lowering power consumption, but many GPU architecture modifications were required to further reduce power consumption while maintaining great performance.

Every hardware unit in Kepler was designed and scrubbed to provide outstanding performance per watt. The best example of great perf/watt is seen in the design of Kepler GK110’s new Streaming Multiprocessor (SMX), which is similar in many respects to the SMX unit recently introduced in Kepler GK104, but includes substantially more double precision units for compute algorithms.

Streaming Multiprocessor (SMX) Architecture

Kepler GK110’s new SMX introduces several architectural innovations that make it not only the most powerful multiprocessor we’ve built, but also the most programmable and power‐efficient.

SMX: 192 single‐precision CUDA cores, 64 double‐precision units, 32 special function units (SFU), and 32 load/store units (LD/ST).

SMX Processing Core Architecture

Each of the Kepler GK110 SMX units features 192 single‐precision CUDA cores, and each core has fully pipelined floating‐point and integer arithmetic logic units. Kepler retains the full IEEE 754‐2008 compliant single‐ and double‐precision arithmetic introduced in Fermi, including the fused multiply‐add (FMA) operation.

One of the design goals for the Kepler GK110 SMX was to significantly increase the GPU’s delivered double precision performance, since double precision arithmetic is at the heart of many HPC applications. Kepler GK110’s SMX also retains the special function units (SFUs) for fast approximate transcendental operations as in previous‐generation GPUs, providing 8x the number of SFUs of the Fermi GF110 SM.

Similar to GK104 SMX units, the cores within the new GK110 SMX units use the primary GPU clock rather than the 2x shader clock. Recall the 2x shader clock was introduced in the G80 Tesla‐architecture GPU and used in all subsequent Tesla‐ and Fermi‐architecture GPUs. Running execution units at a higher clock rate allows a chip to achieve a given target throughput with fewer copies of the execution units, which is essentially an area optimization, but the clocking logic for the faster cores is more power‐hungry. For Kepler, our priority was performance per watt. While we made many optimizations that benefitted both area and power, we chose to optimize for power even at the expense of some added area cost, with a larger number of processing cores running at the lower, less power‐hungry GPU clock.

Quad Warp Scheduler

The SMX schedules threads in groups of 32 parallel threads called warps. Each SMX features four warp schedulers and eight instruction dispatch units, allowing four warps to be issued and executed concurrently. Kepler’s quad warp scheduler selects four warps, and two independent instructions per warp can be dispatched each cycle. Unlike Fermi, which did not permit double precision instructions to be paired with other instructions, Kepler GK110 allows double precision instructions to be paired with certain other instructions that have no register file reads, including load/store, texture, and some integer instructions.

Each Kepler SMX contains 4 Warp Schedulers, each with dual Instruction Dispatch Units. A single Warp Scheduler Unit is shown above.

We also looked for opportunities to optimize the power in the SMX warp scheduler logic. For example, both Kepler and Fermi schedulers contain similar hardware units to handle the scheduling function, including:

a) Register scoreboarding for long latency operations (texture and load)
b) Inter‐warp scheduling decisions (e.g., pick the best warp to go next among eligible candidates)
c) Thread block level scheduling (e.g., the GigaThread engine)

However, Fermi’s scheduler also contains a complex hardware stage to prevent data hazards in the math datapath itself. A multi‐port register scoreboard keeps track of any registers that are not yet ready with valid data, and a dependency checker block analyzes register usage across a multitude of fully decoded warp instructions against the scoreboard, to determine which are eligible to issue.

For Kepler, we recognized that this information is deterministic (the math pipeline latencies are not variable), and therefore it is possible for the compiler to determine up front when instructions will be ready to issue, and provide this information in the instruction itself. This allowed us to replace several complex and power‐expensive blocks with a simple hardware block that extracts the pre‐determined latency information and uses it to mask out warps from eligibility at the inter‐warp scheduler stage.

New ISA Encoding: 255 Registers per Thread

The number of registers that can be accessed by a thread has been quadrupled in GK110, allowing each thread access to up to 255 registers. Codes that exhibit high register pressure or spilling behavior in Fermi may see substantial speedups as a result of the increased available per‐thread register count. A compelling example can be seen in the QUDA library for performing lattice QCD (quantum chromodynamics) calculations using CUDA. QUDA fp64‐based algorithms see performance increases up to 5.3x due to the ability to use many more registers per thread and experiencing fewer spills to local memory.

Shuffle Instruction

To further improve performance, Kepler implements a new Shuffle instruction, which allows threads within a warp to share data. Previously, sharing data between threads within a warp required separate store and load operations to pass the data through shared memory. With the Shuffle instruction, threads within a warp can read values from other threads in the warp in just about any imaginable permutation. Shuffle supports arbitrary indexed references – i.e., any thread reads from any other thread. Useful shuffle subsets, including next‐thread (offset up or down by a fixed amount) and XOR “butterfly” style permutations among the threads in a warp, are also available as CUDA intrinsics.

Shuffle offers a performance advantage over shared memory, in that a store‐and‐load operation is carried out in a single step. Shuffle also can reduce the amount of shared memory needed per thread block, since data exchanged at the warp level never needs to be placed in shared memory. In the case of FFT, which requires data sharing within a warp, a 6% performance gain can be seen just by using Shuffle.

This example shows some of the variations possible using the new Shuffle instruction in Kepler.
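As a further illustration (a minimal sketch, not from the whitepaper), the XOR “butterfly” variant can implement a warp‐wide sum reduction with no shared memory at all; __shfl_xor is the Kepler‐era intrinsic (later CUDA versions use __shfl_xor_sync):

    // Warp-wide sum using butterfly shuffles; assumes a full warp of 32 threads.
    __global__ void warpReduceSum(const float *in, float *out) {
        float val = in[threadIdx.x];
        // Each step exchanges values between lanes whose IDs differ by 'mask',
        // halving the number of distinct partial sums per step.
        for (int mask = 16; mask > 0; mask >>= 1)
            val += __shfl_xor(val, mask);
        if (threadIdx.x == 0) *out = val;  // after the loop, every lane holds the total
    }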

Atomic Operations

Atomic memory operations are important in parallel programming, allowing concurrent threads to correctly perform read‐modify‐write operations on shared data structures. Atomic operations such as add, min, max, and compare‐and‐swap are atomic in the sense that the read, modify, and write operations are performed without interruption by other threads. Atomic memory operations are widely used for parallel sorting, reduction operations, and building data structures in parallel without locks that serialize thread execution.

Atomic operation throughput on Kepler GK110 is substantially improved compared to the Fermi generation. Atomic operation throughput to a common address is improved by 9x, to one operation per clock. Atomic operation throughput to independent addresses is also significantly accelerated and logic to handle address conflicts has been made more efficient. Atomic operations can often be processed at rates similar to generic load operations. This speed increase makes atomics fast enough to use frequently within kernel inner loops, eliminating explicit reduction passes that were previously required to consolidate results.

Kepler GK110 also introduces additional native support for 64‐bit atomic operations. In addition to atomicAdd, atomicCAS, and atomicExch (supported by Fermi and Kepler GK104), GK110 supports native:

• atomicMin
• atomicMax
• atomicAnd
• atomicOr
• atomicXor

Other atomic operations which are not supported natively (for example 64‐bit floating point atomics) may be emulated using the compare‐and‐swap (CAS) instruction.
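For example, a 64‐bit floating‐point atomic add can be built from atomicCAS using the well‐known pattern from the CUDA C Programming Guide:

    // Emulate atomicAdd on double via 64-bit compare-and-swap.
    __device__ double atomicAddDouble(double *address, double val) {
        unsigned long long int *addr = (unsigned long long int *)address;
        unsigned long long int old = *addr, assumed;
        do {
            assumed = old;
            // Add in floating point, then attempt to swap the new bit pattern in;
            // atomicCAS returns the value it found, so a mismatch means retry.
            old = atomicCAS(addr, assumed,
                            __double_as_longlong(val + __longlong_as_double(assumed)));
        } while (assumed != old);
        return __longlong_as_double(old);
    }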

Texture Improvements

The GPU’s dedicated hardware Texture units are a valuable resource for compute programs with a need to sample or filter image data. The texture throughput in Kepler is significantly increased compared to Fermi – each SMX unit contains 16 texture filtering units, a 4x increase vs the Fermi GF110 SM.

In addition, Kepler changes the way texture state is managed. In the Fermi generation, for the GPU to reference a texture, it had to be assigned a “slot” in a fixed‐size binding table prior to grid launch. The number of slots in that table ultimately limits how many unique textures a program can read from at run time; in Fermi, a program was limited to accessing only 128 simultaneous textures.

With bindless textures in Kepler, the additional step of using slots isn’t necessary: texture state is now saved as an object in memory and the hardware fetches these state objects on demand, making binding tables obsolete. This effectively eliminates any limits on the number of unique textures that can be referenced by a compute program. Instead, programs can map textures at any time and pass texture handles around as they would any other pointer.
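In CUDA 5.0 this bindless model surfaces as texture objects. A minimal sketch (buffer name d_data and size n are illustrative):

    // Describe a linear device buffer d_data of n floats as a texture resource.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = d_data;
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    // Create the texture object: its state lives in memory, no binding slot needed.
    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

    // The handle is passed to kernels like any other argument:
    //   __global__ void k(cudaTextureObject_t t, int n) {
    //       int i = blockIdx.x * blockDim.x + threadIdx.x;
    //       if (i < n) { float v = tex1Dfetch<float>(t, i); /* ... */ }
    //   }
    cudaDestroyTextureObject(tex);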

Kepler Memory Subsystem – L1, L2, ECC

Kepler’s memory hierarchy is organized similarly to Fermi. The Kepler architecture supports a unified memory request path for loads and stores, with an L1 cache per SMX multiprocessor. Kepler GK110 also enables compiler‐directed use of an additional new cache for read‐only data, as described below.

64 KB Configurable Shared Memory and L1 Cache

In the Kepler GK110 architecture, as in the previous generation Fermi architecture, each SMX has 64 KB of on‐chip memory that can be configured as 48 KB of shared memory with 16 KB of L1 cache, or as 16 KB of shared memory with 48 KB of L1 cache. Kepler now allows for additional flexibility in configuring the allocation of shared memory and L1 cache by permitting a 32KB / 32KB split between shared memory and L1 cache. To support the increased throughput of each SMX unit, the shared memory bandwidth for 64b and larger load operations is also doubled compared to the Fermi SM, to 256B per core clock.
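The split is selected through the runtime API, either per device or per kernel; the 32KB / 32KB option corresponds to cudaFuncCachePreferEqual. A brief sketch (myKernel is a placeholder):

    // Device-wide preference (applies to kernels without their own setting):
    cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);  // 48KB shared / 16KB L1
    cudaDeviceSetCacheConfig(cudaFuncCachePreferEqual);   // 32KB shared / 32KB L1 (new in Kepler)
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);      // 16KB shared / 48KB L1

    // Or per kernel:
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferEqual);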

48KB Read‐Only Data Cache

In addition to the L1 cache, Kepler introduces a 48KB cache for data that is known to be read ‐only for the duration of the function. In the Fermi generation, this cache was accessible only by the Texture unit. Expert programmers often found it advantageous to load data through this path explicitly by mapping their data as textures, but this approach had many limitations.

In Kepler, in addition to significantly increasing the capacity of this cache along with the texture horsepower increase, we decided to make the cache directly accessible to the SM for general load operations. Use of the read‐only path is beneficial because it takes both load and working set footprint off of the Shared/L1 cache path. In addition, the Read‐Only Data Cache’s higher tag bandwidth supports full speed unaligned memory access patterns among other scenarios.

Use of this path is managed automatically by the compiler – access to any variable or data structure that is known to be constant through programmer use of the C99‐standard “const __restrict” keyword will be tagged by the compiler to be loaded through the Read‐Only Data Cache.
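A minimal sketch of the qualification described above (the kernel itself is illustrative); on sm_35 the same effect can also be requested explicitly with the __ldg() intrinsic:

    // 'const __restrict__' tells the compiler 'in' is read-only and not aliased,
    // making its loads eligible for the 48KB read-only data cache on GK110.
    __global__ void scale(float *out, const float * __restrict__ in,
                          float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a * in[i];      // or explicitly: a * __ldg(&in[i])
    }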

Improved L2 Cache

The Kepler GK110 GPU features 1536KB of dedicated L2 cache memory, double the amount of L2 available in the Fermi architecture. The L2 cache is the primary point of data unification between the SMX units, servicing all load, store, and texture requests and providing efficient, high speed data sharing across the GPU. The L2 cache on Kepler offers up to 2x the bandwidth per clock available in Fermi. Algorithms for which data addresses are not known beforehand, such as physics solvers, ray tracing, and sparse matrix multiplication, especially benefit from the cache hierarchy. Filter and convolution kernels that require multiple SMs to read the same data also benefit.

Memory Protection Support

Like Fermi, Kepler’s register files, shared memories, L1 cache, L2 cache and DRAM memory are protected by a Single‐Error Correct Double‐Error Detect (SECDED) ECC code. In addition, the Read‐Only Data Cache supports single‐error correction through a parity check; in the event of a parity error, the cache unit automatically invalidates the failed line, forcing a read of the correct data from L2.

ECC checkbit fetches from DRAM necessarily consume some amount of DRAM bandwidth, which results in a performance difference between ECC‐enabled and ECC‐disabled operation, especially on memory bandwidth‐sensitive applications. Kepler GK110 implements several optimizations to ECC checkbit fetch handling based on Fermi experience. As a result, the ECC on‐vs‐off performance delta has been reduced by an average of 66%, as measured across our internal compute application test suite.

Dynamic Parallelism

In a hybrid CPU‐GPU system, enabling a larger amount of parallel code in an application to run efficiently and entirely within the GPU improves scalability and performance as GPUs increase in perf/watt. To accelerate these additional parallel portions of the application, GPUs must support more varied types of parallel workloads.

Dynamic Parallelism is a new feature introduced with Kepler GK110 that allows the GPU to generate new work for itself, synchronize on results, and control the scheduling of that work via dedicated, accelerated hardware paths, all without involving the CPU.

Fermi was very good at processing large parallel data structures when the scale and parameters of the problem were known at kernel launch time. All work was launched from the host CPU, would run to completion, and return a result back to the CPU. The result would then be used as part of the final solution, or would be analyzed by the CPU which would then send additional requests back to the GPU for additional processing.

In Kepler GK110 any kernel can launch another kernel, and can create the necessary streams, events and manage the dependencies needed to process additional work without the need for host CPU interaction. This architectural innovation makes it easier for developers to create and optimize recursive and data‐dependent execution patterns, and allows more of a program to be run directly on GPU. The system CPU can then be freed up for additional tasks, or the system could be configured with a less powerful CPU to carry out the same workload.
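In CUDA terms, a launch from device code looks just like a launch from the host. A minimal sketch (kernel names are illustrative; requires compute capability 3.5, compiled with -rdc=true and linked against cudadevrt):

    __global__ void childKernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;              // placeholder work
    }

    __global__ void parentKernel(float *data, int n) {
        if (threadIdx.x == 0 && blockIdx.x == 0) {
            // Launch a child grid directly from the GPU, no CPU round trip.
            childKernel<<<(n + 255) / 256, 256>>>(data, n);
            cudaDeviceSynchronize();             // wait for the child grid to finish
        }
    }
    // Build: nvcc -arch=sm_35 -rdc=true -lcudadevrt ...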

Dynamic Parallelism allows more parallel code in an application to be launched directly by the GPU onto itself (right side of image) rather than requiring CPU intervention (left side of image).

Dynamic Parallelism allows more varieties of parallel algorithms to be implemented on the GPU, including nested loops with differing amounts of parallelism, parallel teams of serial control‐task threads, or simple serial control code offloaded to the GPU in order to promote data‐locality with the parallel portion of the application.

Because a kernel has the ability to launch additional workloads based on intermediate, on‐GPU results, programmers can now intelligently load‐balance work to focus the bulk of their resources on the areas of the problem that either require the most processing power or are most relevant to the solution.

One example would be dynamically setting up a grid for a numerical simulation – typically grid cells are focused in regions of greatest change, requiring an expensive pre‐processing pass through the data. Alternatively, a uniformly coarse grid could be used to prevent wasted GPU resources, or a uniformly fine grid could be used to ensure all the features are captured, but these options risk missing simulation features or “over‐spending” compute resources on regions of less interest.

With Dynamic Parallelism, the grid resolution can be determined dynamically at runtime in a data‐dependent manner. Starting with a coarse grid, the simulation can “zoom in” on areas of interest while avoiding unnecessary calculation in areas with little change. Though this could be accomplished using a sequence of CPU‐launched kernels, it would be far simpler to allow the GPU to refine the grid itself by analyzing the data and launching additional work as part of a single simulation kernel, eliminating interruption of the CPU and data transfers between the CPU and GPU.

– Image attribution: Charles Reid

The above example illustrates the benefits of using a dynamically sized grid in a numerical simulation. To meet peak precision requirements, a fixed resolution simulation must run at an excessively fine resolution across the entire simulation domain, whereas a multi‐resolution grid applies the correct simulation resolution to each area based on local variation.

Hyper‐Q

One of the challenges in the past has been keeping the GPU supplied with an optimally scheduled load of work from multiple streams. The Fermi architecture supported 16‐way concurrency of kernel launches from separate streams, but ultimately the streams were all multiplexed into the same hardware work queue. This allowed for false intra‐stream dependencies, requiring dependent kernels within one stream to complete before additional kernels in a separate stream could be executed. While this could be alleviated to some extent through the use of a breadth‐first launch order, as program complexity increases, this can become more and more difficult to manage efficiently.

Kepler GK110 improves on this functionality with the new Hyper‐Q feature. Hyper‐Q increases the total number of connections (work queues) between the host and the CUDA Work Distributor (CWD) logic in the GPU by allowing 32 simultaneous, hardware‐managed connections (compared to the single connection available with Fermi). Hyper‐Q is a flexible solution that allows connections from multiple CUDA streams, from multiple Message Passing Interface (MPI) processes, or even from multiple threads within a process. Applications that previously encountered false serialization across tasks, thereby limiting GPU utilization, can see up to a 32x performance increase without changing any existing code.

Hyper‐Q permits more simultaneous connections between CPU and GPU.

Each CUDA stream is managed within its own hardware work queue, inter‐stream dependencies are optimized, and operations in one stream will no longer block other streams, enabling streams to execute concurrently without needing to specifically tailor the launch order to eliminate possible false dependencies.
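From the programmer's side nothing new is required; independent streams that Fermi would have serialized simply run concurrently. A short sketch (smallKernel and d_buf are illustrative):

    cudaStream_t streams[8];                     // GK110 offers up to 32 hardware queues
    for (int i = 0; i < 8; ++i)
        cudaStreamCreate(&streams[i]);

    // Each stream maps to its own hardware work queue on GK110, so these
    // launches proceed concurrently with no false inter-stream dependencies.
    for (int i = 0; i < 8; ++i)
        smallKernel<<<1, 64, 0, streams[i]>>>(d_buf[i]);

    cudaDeviceSynchronize();                     // wait for all streams to drain
    for (int i = 0; i < 8; ++i)
        cudaStreamDestroy(streams[i]);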

Hyper‐Q offers significant benefits for use in MPI‐based parallel computer systems. Legacy MPI‐based algorithms were often created to run on multi‐core CPU systems, with the amount of work assigned to each MPI process scaled accordingly. This can lead to a single MPI process having insufficient work to fully occupy the GPU. While it has always been possible for multiple MPI processes to share a GPU, these processes could become bottlenecked by false dependencies. Hyper‐Q removes those false dependencies, dramatically increasing the efficiency of GPU sharing across MPI processes.

Hyper‐Q working with CUDA Streams: In the Fermi model shown on the left, only (C,P) & (R,X) can run concurrently due to intra‐stream dependencies caused by the single hardware work queue. The Kepler Hyper‐Q model allows all streams to run concurrently using separate work queues.

Grid Management Unit ‐ Efficiently Keeping the GPU Utilized

New features in Kepler GK110, such as the ability for CUDA kernels to launch work directly on the GPU with Dynamic Parallelism, required that the CPU‐to‐GPU workflow in Kepler offer increased functionality over the Fermi design. On Fermi, a grid of thread blocks would be launched by the CPU and would always run to completion, creating a simple unidirectional flow of work from the host to the SMs via the CUDA Work Distributor (CWD) unit. Kepler GK110 was designed to improve the CPU‐to‐GPU workflow by allowing the GPU to efficiently manage both CPU‐ and CUDA‐created workloads.

We discussed the ability of the Kepler GK110 GPU to allow kernels to launch work directly on the GPU, and it’s important to understand the changes made in the Kepler GK110 architecture to facilitate these new functions. In Kepler, a grid can be launched from the CPU just as was the case with Fermi, however new grids can also be created programmatically by CUDA within the Kepler SMX unit. To manage both CUDA‐created and host‐originated grids, a new Grid Management Unit (GMU) was introduced in Kepler GK110. This control unit manages and prioritizes grids that are passed into the CWD to be sent to the SMX units for execution.

The CWD in Kepler holds grids that are ready to dispatch, and it is able to dispatch 32 active grids, which is double the capacity of the Fermi CWD. The Kepler CWD communicates with the GMU via a bi‐directional link that allows the GMU to pause the dispatch of new grids and to hold pending and suspended grids until needed. The GMU also has a direct connection to the Kepler SMX units to permit grids that launch additional work on the GPU via Dynamic Parallelism to send the new work back to GMU to be prioritized and dispatched. If the kernel that dispatched the additional workload pauses, the GMU will hold it inactive until the dependent work has completed.

The redesigned Kepler HOST to GPU workflow shows the new Grid Management Unit, which allows it to manage the actively dispatching grids, pause dispatch, and hold pending and suspended grids.

NVIDIA GPUDirect?

When working with a large amount of data, increasing the data throughput and reducing latency is vital to increasing compute performance. Kepler GK110 supports the RDMA feature in NVIDIA GPUDirect, which is designed to improve performance by allowing direct access to GPU memory by third‐party devices such as IB adapters, NICs, and SSDs. When using CUDA 5.0, GPUDirect provides the following important features:

• Direct memory access (DMA) between NIC and GPU without the need for CPU‐side data buffering
• Significantly improved MPI_Send/MPI_Recv efficiency between GPU and other nodes in a network
• Eliminates CPU bandwidth and latency bottlenecks
• Works with a variety of 3rd‐party network, capture, and storage devices

Applications like reverse time migration (used in seismic imaging for oil & gas exploration) distribute the large imaging data across several GPUs. Hundreds of GPUs must collaborate to crunch the data, often communicating intermediate results. GPUDirect enables much higher aggregate bandwidth for this GPU‐to‐GPU communication scenario within a server and across servers with the P2P and RDMA features.
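With a CUDA‐aware MPI implementation built on GPUDirect RDMA, a device pointer can be handed directly to MPI calls. A hypothetical halo exchange, for illustration only:

    #include <mpi.h>
    #include <cuda_runtime.h>

    // Hypothetical halo exchange: the MPI library, via GPUDirect RDMA, lets the
    // NIC read and write GPU memory directly, with no host staging buffer.
    void exchangeHalo(float *d_halo, int count, int peer) {
        MPI_Sendrecv_replace(d_halo, count, MPI_FLOAT,
                             peer, 0,   // destination rank, send tag
                             peer, 0,   // source rank, receive tag
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }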

Kepler GK110 also supports other GPUDirect features such as Peer‐to‐Peer and GPUDirect for Video.

GPUDirect RDMA allows direct access to GPU memory from 3rd‐party devices such as network adapters, which translates into direct transfers between GPUs across nodes as well.

Conclusion

With the launch of Fermi in 2010, NVIDIA ushered in a new era in the high performance computing (HPC) industry based on a hybrid computing model where CPUs and GPUs work together to solve computationally‐intensive workloads. Now, with the new Kepler GK110 GPU, NVIDIA again raises the bar for the HPC industry.

Kepler GK110 was designed from the ground up to maximize computational performance and throughput computing with outstanding power efficiency. The architecture has many new innovations such as SMX, Dynamic Parallelism, and Hyper‐Q that make hybrid computing dramatically faster, easier to program, and applicable to a broader set of applications. Kepler GK110 GPUs will be used in numerous systems ranging from workstations to supercomputers to address the most daunting challenges in HPC.
