Microsoft announced DirectX Raytracing a year ago, promising to bring hardware-accelerated ray tracing to computer games. In August, Nvidia announced the RTX 2080 and 2080 Ti, a pair of new video cards built on the company's new Turing GPUs. In addition to the usual graphics-processing hardware, these chips include two extra sets of cores: one designed for machine-learning workloads, the other for ray-tracing calculations. These cards are the first, and so far the only, cards to support DirectX Raytracing (DXR). That will change in April, as Nvidia has announced that a driver update will extend DXR support to its Pascal-based GTX 10-series and Turing-based GTX 16-series cards.
Not surprisingly, the performance of these cards will not match that of the RTX chips. The RTX chips use both their ray-tracing cores and their machine-learning cores for DXR graphics: to achieve acceptable performance, the ray tracing simulates relatively few light rays and uses machine learning-based denoising to clean up the resulting images. Lacking this specialized hardware, DXR on the GTX chips will run as 32-bit integer operations on the CUDA cores already used for compute and shader workloads.
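The few-rays-plus-denoising trade-off can be illustrated with a toy sketch (this is an invented example, not Nvidia's pipeline: the constant-brightness scene, the sample counts, and the simple box filter standing in for the ML denoiser are all assumptions made here for illustration):

```python
import random

# Toy illustration: estimate each pixel of a constant-brightness scene
# with very few noisy "ray" samples, then smooth the result with a
# simple box filter (standing in for the real ML-based denoiser).
random.seed(0)

TRUE_BRIGHTNESS = 0.5

def noisy_pixel(samples):
    # Each simulated ray returns the true brightness plus random noise.
    return sum(TRUE_BRIGHTNESS + random.uniform(-0.2, 0.2)
               for _ in range(samples)) / samples

# Few rays per pixel -> a noisy 64-pixel "image".
raw = [noisy_pixel(samples=2) for _ in range(64)]

# Denoise by averaging each pixel with its neighbors.
denoised = [sum(raw[max(0, i - 2):i + 3]) / len(raw[max(0, i - 2):i + 3])
            for i in range(len(raw))]

def rms_error(image):
    return (sum((p - TRUE_BRIGHTNESS) ** 2 for p in image) / len(image)) ** 0.5

# Denoising reduces the error without tracing any additional rays.
assert rms_error(denoised) < rms_error(raw)
```

The point is only that a post-process filter recovers image quality that would otherwise require tracing many more rays per pixel.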
Nvidia says the GTX Turing and Pascal cards will take two to three times longer to render each frame than the RTX cards, with the difference especially pronounced for the Pascal cards. On Turing, the 32-bit integer workload used for ray tracing can run concurrently with the 32-bit floating-point workload used for other graphics tasks. That is not the case on Pascal, where the two workloads have to run sequentially.
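A back-of-envelope model makes the concurrency difference concrete (the millisecond figures below are invented for illustration, not Nvidia's numbers):

```python
# Toy frame-time model: a frame consists of an FP32 shading workload
# and an INT32 ray-traversal workload. Hardware that can issue both
# concurrently (GTX Turing) overlaps them; hardware that cannot
# (Pascal) must run them back to back.
def frame_time(fp_ms, int_ms, concurrent):
    return max(fp_ms, int_ms) if concurrent else fp_ms + int_ms

# e.g. 10 ms of FP32 shading plus 8 ms of INT32 ray traversal:
turing_gtx = frame_time(10, 8, concurrent=True)   # overlapped: 10 ms
pascal     = frame_time(10, 8, concurrent=False)  # sequential: 18 ms
```

Under this simplified model, the ray-tracing work is nearly free on Turing whenever it fits under the floating-point work, while on Pascal it adds its full cost to every frame.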
This lower performance means that Nvidia recommends developers use only simpler ray-tracing effects on the older chips. On RTX cards, ray-tracing performance may be good enough to allow for global illumination, a form of ray tracing that adds indirect, reflected lighting on top of the usual direct lighting from light sources; for the GTX parts, Nvidia recommends sticking to simpler effects such as material-specific reflections.