LEADTEK NVIDIA QUADRO RTX4000
2,304 NVIDIA® CUDA® Cores
288 NVIDIA® Tensor Cores
36 NVIDIA® RT Cores
8GB GDDR6 Memory
Up to 416GB/s Memory Bandwidth
6.0 Giga Rays/s Rays Cast
7.1 TFLOPS FP32 Performance
14.2 TFLOPS FP16 Performance
28.5 TOPS INT8 Performance
57.0 TFLOPS of Tensor Operation
Max. Power Consumption: 160W
3x DisplayPort 1.4 + 1x VirtualLink
The Most Advanced Single-slot Professional Graphics Solution
Quadro RTX 4000 combines the NVIDIA Turing GPU architecture with the latest memory and display technologies to deliver the best performance and features in a single-slot PCIe form factor. Enjoy greater fluidity with photorealistic rendering, experience faster performance with AI-enabled applications, and create detailed, lifelike VR experiences more cost-effectively and across a broader range of workstation chassis configurations.
Turing GPU Architecture
Based on the state-of-the-art 12 nm FFN (FinFET NVIDIA) high-performance manufacturing process customized for NVIDIA and incorporating 2,304 CUDA cores, the Quadro RTX 4000 GPU is the most powerful computing platform for HPC, AI, VR, and graphics workloads on professional desktops in a single-slot form factor. The Turing GPU architecture enables the biggest leap in real-time computer graphics rendering since NVIDIA's introduction of programmable shaders in 2001. It includes 13.6 billion transistors on a die size of 545 mm². Able to deliver more than 7.1 TFLOPS of single-precision (FP32), 14.2 TFLOPS of half-precision (FP16), 28.5 TOPS of integer-precision (INT8), and 57.0 TFLOPS of tensor operation capability, it supports a wide range of compute-intensive workloads flawlessly.
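As a sanity check, the headline throughput figures above follow directly from the core count and clock speed. A minimal sketch, assuming a typical boost clock of roughly 1.545 GHz (a value not stated in this datasheet):

```python
# Sketch: reproducing the datasheet's peak-throughput figures from first
# principles. The boost clock below is an assumption, not a datasheet value.
cuda_cores = 2304
boost_clock_ghz = 1.545  # assumed typical boost clock

fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000  # 2 FLOPs/clock (FMA)
fp16_tflops = 2 * fp32_tflops   # FP16 runs at 2x the FP32 rate on Turing
int8_tops = 2 * fp16_tflops     # INT8 at 2x the FP16 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~7.1
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~14.2
print(f"INT8: {int8_tops:.1f} TOPS")      # ~28.5
```

The same core-count-times-clock arithmetic recovers all three peak figures in the spec table.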
New dedicated hardware-based ray-tracing technology allows the GPU, for the first time, to render film-quality, photorealistic objects and environments in real time, with physically accurate shadows, reflections, and refractions. The real-time ray-tracing engine works with the NVIDIA OptiX, Microsoft DXR, and Vulkan APIs to deliver a level of realism far beyond what is possible with traditional rendering techniques. RT Cores accelerate Bounding Volume Hierarchy (BVH) traversal and ray-casting functions, achieving high image quality with a low number of rays cast per pixel.
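To put the 6.0 Giga Rays/s figure in context, it can be translated into a per-pixel ray budget. A quick sketch, where the resolution and frame-rate targets are illustrative assumptions rather than datasheet values:

```python
# Sketch: what 6.0 Giga Rays/s means as a per-pixel ray budget.
# Resolution and frame rate are illustrative assumptions.
rays_per_second = 6.0e9
width, height, fps = 3840, 2160, 60   # assumed 4K @ 60 fps target

rays_per_pixel = rays_per_second / (width * height * fps)
print(f"{rays_per_pixel:.1f} rays/pixel/frame")  # ~12.1
```

Roughly a dozen rays per pixel per frame at 4K 60 fps, which is why denoising and a low ray count per pixel matter for real-time ray tracing.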
Enhanced Tensor Cores
New mixed-precision cores purpose-built for deep learning matrix arithmetic deliver up to 8x the training TFLOPS of the previous generation. Quadro RTX 4000 features 288 Tensor Cores; each Tensor Core performs 64 floating-point fused multiply-add (FMA) operations per clock, and each SM performs a total of 1,024 individual floating-point operations per clock. In addition to supporting FP16/FP32 matrix operations, the new Tensor Cores add INT8 (2,048 integer operations per clock per SM) and experimental INT4 and INT1 (binary) precision modes for matrix operations.
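The 57.0 TFLOPS tensor figure in the spec table follows from these per-clock numbers. A sketch, again assuming a boost clock of about 1.545 GHz (not stated in this datasheet):

```python
# Sketch: deriving the 57.0 TFLOPS tensor figure. Ops-per-clock values
# come from the text above; the boost clock is an assumed value.
tensor_cores = 288
fma_per_tensor_core = 64   # FMAs per Tensor Core per clock
flops_per_fma = 2          # one multiply + one add
boost_clock_ghz = 1.545    # assumed boost clock

ops_per_clock = tensor_cores * fma_per_tensor_core * flops_per_fma
tensor_tflops = ops_per_clock * boost_clock_ghz / 1000

print(ops_per_clock)           # 36864 (= 36 SMs x 1024 FP ops/clock)
print(f"{tensor_tflops:.1f}")  # ~57.0
```

Note the consistency check: 288 Tensor Cores across 36 SMs gives 8 per SM, and 8 x 64 FMAs x 2 FLOPs = the 1,024 floating-point operations per SM per clock quoted above.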
Advanced Shading Technologies
- Mesh Shading: Compute-based geometry pipeline that speeds geometry processing and culling on geometrically complex models and scenes. Mesh shading provides up to 2x performance improvement on geometry-bound workloads.
- Variable Rate Shading (VRS): Gain rendering efficiency by varying the shading rate based on scene content, direction of gaze, and motion. Variable rate shading provides similar image quality with a 50% reduction in shaded pixels.
- Texture Space Shading: Object/texture-space shading to improve the performance of pixel-shader-heavy workloads such as depth-of-field and motion blur. Texture space shading provides greater throughput with increased fidelity by reusing pre-shaded texels for pixel-shader-heavy VR workloads.
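One way the "50% reduction in shaded pixels" figure for VRS can arise is worth making concrete. A sketch, where the screen-coverage fraction is an illustrative assumption, not a datasheet value:

```python
# Sketch: shading-work reduction from variable rate shading.
# The coarse-coverage fraction below is an illustrative assumption.
def shaded_fraction(coarse_coverage, rate=4):
    """Fraction of pixels shaded when `coarse_coverage` of the screen is
    shaded at one invocation per `rate` pixels (e.g. 2x2 blocks -> rate 4)."""
    return (1 - coarse_coverage) + coarse_coverage / rate

# Shading two-thirds of the screen at a 2x2 coarse rate halves the work:
print(f"{shaded_fraction(2/3):.2f}")  # 0.50
```

In practice the coverage fraction varies per frame with content, gaze, and motion; this just shows the arithmetic behind the claimed savings.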
High Performance GDDR6 Memory
Built with Turing's vastly optimized 8GB GDDR6 memory subsystem for the industry's fastest graphics memory (416 GB/s peak bandwidth), Quadro RTX 4000 is the ideal platform for latency-sensitive applications handling large datasets. Quadro RTX 4000 delivers more than 70% greater memory bandwidth than the previous generation.
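The 416 GB/s peak figure follows from the memory interface width and per-pin data rate. A sketch, where both values are assumptions typical for this part rather than figures stated in this datasheet:

```python
# Sketch: reproducing the 416 GB/s peak bandwidth figure.
# Bus width and per-pin data rate are assumed, not taken from the text.
bus_width_bits = 256   # assumed 256-bit GDDR6 memory interface
data_rate_gbps = 13    # assumed 13 Gbps per pin

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(bandwidth_gb_s)  # 416.0
```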
Single Instruction, Multiple Thread (SIMT)
New independent thread scheduling capability enables finer-grain synchronization and cooperation between parallel threads by sharing resources among small jobs.
Advanced Streaming Multiprocessor (SM) Architecture
Combined shared memory and L1 cache improve performance significantly, while simplifying programming and reducing the tuning required to attain best application performance. Each SM contains 96 KB of L1/shared memory, which can be configured for various capacities depending on the compute or graphics workload. For compute cases, up to 64 KB can be allocated to the L1 cache or shared memory, while graphics workloads can allocate up to 48 KB for shared memory, 32 KB for L1, and 16 KB for texture units. Combining the L1 data cache with the shared memory reduces latency and provides higher bandwidth.
Double the throughput and reduce storage requirements with 16-bit floating point precision computing to enable the training and deployment of larger neural networks. With independent parallel integer and floating-point data paths, the Turing SM is also much more efficient on workloads with a mix of computation and addressing calculations.
Error Correcting Code (ECC) on Graphics Memory
Meet strict data integrity requirements for mission critical applications with uncompromised computing accuracy and reliability for workstations.
Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.
Preemption at the instruction-level provides finer grain control over compute tasks to prevent long-running applications from either monopolizing system resources or timing out.
H.264 and HEVC Encode/Decode Engines
Deliver faster than real-time performance for transcoding, video editing, and other encoding applications with two dedicated H.264/HEVC encode engines and a dedicated decode engine that are independent of the 3D/compute pipeline.
NVIDIA GPU BOOST 4.0
Automatically maximizes application performance without exceeding the power and thermal envelope of the card. Allows applications to stay in the boost clock state longer, under a higher temperature threshold, before dropping to a secondary base clock.
| Specification | Value |
| --- | --- |
| CUDA Parallel Processing Cores | 2,304 |
| NVIDIA Tensor Cores | 288 |
| NVIDIA RT Cores | 36 |
| Frame Buffer Memory | 8 GB GDDR6 |
| Rays Cast | 6.0 Giga Rays/s |
| Peak Single Precision (FP32) Performance | 7.1 TFLOPS |
| Peak Half Precision (FP16) Performance | 14.2 TFLOPS |
| Peak Integer Operation (INT8) Performance | 28.5 TOPS |
| Deep Learning TFLOPS¹ | 57.0 TFLOPS |
| Memory Bandwidth | Up to 416 GB/s |
| Max Power Consumption | 160 W |
| Graphics Bus | PCI Express 3.0 x16 |
| Display Connectors | DP 1.4 (3) + VirtualLink (1) |
| Form Factor | 4.4" H x 9.5" L |
| Product Weight | 479 g |
| NVIDIA® 3D Vision® and 3D Vision Pro | Supported via 3-pin mini-DIN |
| Frame Lock | Compatible (with Quadro Sync II) |
| Power Connector | 8-pin PCIe |
¹ FP16 matrix multiply with FP16 or FP32 accumulate
- Microsoft Windows 10 (64-bit)
- Microsoft Windows 8 and 8.1 (64-bit)
- Microsoft Windows 7 (64-bit)
- Linux® - Full OpenGL implementation, complete with NVIDIA and ARB extensions (64-bit)
3D Graphics Architecture
- Scalable geometry architecture
- Hardware tessellation engine
- NVIDIA® GigaThread™ engine with 3 async copy engines
- Shader Model 5.1 (OpenGL 4.5 and DirectX 12)
- Up to 32K x 32K texture and render processing
- Transparent multisampling and super sampling
- 16x angle independent anisotropic filtering
- 32-bit per-component floating point texture filtering and blending
- 64x full scene antialiasing (FSAA)/128x FSAA in SLI Mode
- Decode acceleration for MPEG-2, MPEG-4 Part 2 Advanced Simple Profile, H.264, HEVC, MVC, VC-1, DivX (version 3.11 and later), and Flash (10.1 and later)
- Dedicated H.264 & HEVC Encoder2
- Blu-ray dual-stream hardware acceleration (supporting HD picture-in-picture playback)
- NVIDIA GPU Boost (Automatically improves GPU engine throughput to maximize application performance)
2 This feature requires implementation by software applications and is not a stand-alone utility. Please contact email@example.com for details on availability.
NVIDIA CUDA Parallel Processing Architecture
- New RT (Ray Tracing) Core per SM
- Turing SM Architecture (streaming multi-processor design that delivers greater processing efficiency)
- Dynamic Parallelism (GPU dynamically spawns new threads without going back to the CPU)
- Mixed-precision (1-, 4-, 8-, 16-, 32- and 64-bit) computing
- API support includes: CUDA C, CUDA C++, DirectCompute 5.0, OpenCL, Java, Python, and Fortran
- Error correction codes (ECC) on graphics memory
- Configurable up to 96 KB of RAM (dedicated shared memory size per SM)
Advanced Display Features
- Support for any combination of four connected displays
- Four DisplayPort 1.4 outputs (three DP connectors plus one via the VirtualLink connector), supporting resolutions such as 3840 x 2160 @ 120 Hz, 5120 x 2880 @ 60 Hz, and 7680 x 4320 @ 60 Hz
- DisplayPort to VGA, DisplayPort to DVI (single-link and dual-link) and DisplayPort to HDMI cables (resolution support based on dongle specifications)
- HDR support over DisplayPort 1.4 (SMPTE 2084/2086, BT. 2020) (4K @ 60Hz 10b/12b HEVC Decode, 4K @ 60Hz 10b HEVC Encode)
- HDCP 2.2 support over DisplayPort & HDMI connectors
- 12-bit internal display pipeline (hardware support for 12-bit scanout on supported panels, applications, and connections)
- NVIDIA® 3D Vision™ technology, 3D DLP, Interleaved, and other 3D stereo format support
- Full OpenGL quad buffered stereo support
- Underscan/overscan compensation and hardware scaling
- NVIDIA® nView® multi-display technology
- Support for large-scale, ultra-high resolution visualization using the NVIDIA® SVS platform which includes NVIDIA® Mosaic, NVIDIA® Sync and NVIDIA® Warp/Blend technologies
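The display modes listed above can be roughly sanity-checked against DisplayPort 1.4 link capacity. A sketch assuming 8 bits per color channel, three channels, and ignoring blanking overhead (HBR3 raw rate is 32.4 Gbps; 8b/10b coding leaves about 25.92 Gbps for payload):

```python
# Sketch: uncompressed video payload vs. DP 1.4 (HBR3) link capacity.
# Blanking overhead is ignored for simplicity.
def payload_gbps(width, height, fps, bits_per_channel=8, channels=3):
    return width * height * fps * bits_per_channel * channels / 1e9

hbr3_payload_gbps = 32.4 * 0.8  # ~25.92 Gbps after 8b/10b coding

print(f"{payload_gbps(3840, 2160, 120):.1f}")  # ~23.9 -> fits uncompressed
print(f"{payload_gbps(7680, 4320, 60):.1f}")   # ~47.8 -> exceeds link; needs DSC
```

This is why the 8K @ 60 Hz mode relies on Display Stream Compression, while 4K @ 120 Hz at 8 bits per channel fits within the uncompressed HBR3 payload budget.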
DisplayPort and HDMI Digital Audio
- Support for the following audio modes: Dolby Digital (AC3), DTS 5.1, multi-channel (7.1) LPCM, Dolby Digital Plus (DD+), and MPEG-2/MPEG-4 AAC
- DisplayPort data rates of 48 kHz
- Word sizes of 16-bit, 20-bit, and 24-bit
- HDMI digital audio data rates of 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, and 192 kHz