NVIDIA RTX – A Useless Technology or Something Badly Misunderstood?


NVIDIA’s latest graphics cards, the GeForce RTX series, have been taking a beating in the media for their apparent lack of performance improvement over the previous generation. The question that needs to be asked is: is this negativity fair? To answer that, we need to look at where the expectations come from and what the cards are actually designed to achieve.

Firstly, where exactly do these expectations come from? At no stage did NVIDIA promise a 30% performance increase over the previous generation’s GeForce GTX 1080 Ti using traditional (rasterization) render methods. In fact, no such promise was made at any percentage. Many sites claimed to have leaked performance figures for the cards, and plenty of rumours did the rounds, but one cannot hold NVIDIA to claims it never made.

NVIDIA spent 10 years developing the technology in these cards, dedicating hundreds of engineers to bring us performance that, a generation ago, was only available in render farms, all neatly packaged into a single GPU at a price high-end enthusiasts can afford. To trash RTX without a decent understanding of it is a massive slap in the face to the engineers who spent sleepless nights making it a reality.

The next issue we face is adoption. No current game engine was written from the ground up to support ray tracing; Shadow of the Tomb Raider and Battlefield V both use rasterization engines that were patched to add support for it. Until we have an engine with native support for ray tracing, we can’t truthfully judge what it is or isn’t capable of, or the performance hit incurred. We currently have only one game with consumer RTX support, Battlefield V, and that was only patched in today.
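To appreciate why ray tracing is so much heavier than rasterization, it helps to see the operation a ray-traced frame repeats millions of times: testing a ray against scene geometry. The sketch below is purely illustrative (a simple ray–sphere test, not any engine’s or NVIDIA’s actual code) but it shows the per-ray work that RTX hardware accelerates:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit with a
    sphere, or None if the ray misses. Solving this kind of
    intersection for every ray in the scene is the core workload
    of ray tracing."""
    # Vector from the ray origin to the sphere centre
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None  # ignore hits behind the origin

# A ray fired down the z-axis at a unit sphere 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

At 1080p with even one ray per pixel per frame, this test (against far more complex geometry) runs over a hundred million times per second, which is why dedicated hardware matters.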

Adoption will be a slow process that happens as software catches up to the hardware. When the original 3dfx Voodoo was released, there was no overnight widespread support for Glide. When the original GeForce 256 was released, there was no overnight widespread support for hardware T&L. When the GeForce 3 was released, there was no overnight widespread support for programmable pixel and vertex shaders. All of these took time to gain mainstream support and optimization, and RTX will be no different.

For the first time we are looking at in-game image generation differently, and progress can only go one way. Wait until we have engines written with ray tracing in mind from the get-go before judging its performance, image quality or stability on hurriedly patched games. The developers of Shadow of the Tomb Raider and Battlefield V have not had the tools to implement ray tracing for long, so they are learning on the go, and things will only get better from here.

If you look at the titles available at the release of a new console and compare their graphics quality to games that come out several years later, you will see an immense difference. This isn’t due to the hardware changing, as it remains the same throughout the life of the console. It happens because the software is catching up to the hardware, and game developers are learning to make more efficient use of what is available.

If we do one of the dreaded car analogies, most people probably wouldn’t consider buying a car today without ABS. The first car to introduce all-wheel ABS, the Mercedes W116, cost over $30,000 back in 1978, which is around $120,000 today. Today we don’t need to spend upwards of R 2,000,000 to get ABS – it’s on almost every entry-level car, costing about 5% of that amount. Time brings down the price of technological advancements, and this will be no different. Every “first” comes in at a higher price than it will fetch a little way down the line. There will never be a “right time” for the initial release, as without it there will be no further progress.

So what if the RTX 2080 Ti got 400 frames per second instead of 180 using traditional (rasterization) render methods? Both figures are beyond what most displays can even refresh, so the difference is largely academic. What we need is a way to drastically increase image quality without the performance hit we would have faced before the RTX cards, and that is now a reality. Don’t judge the quality gains by a hastily patched rasterization engine; once engines take native advantage of RTX, things should get a lot better.
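To put those frame rates in perspective, it helps to convert them to per-frame time budgets. A quick, illustrative calculation (the 144 Hz figure is just a common high-refresh display, not a number from the article):

```python
def frame_time_ms(fps):
    """Convert frames per second to milliseconds spent per frame."""
    return 1000.0 / fps

# Even the "slower" 180 fps leaves each frame well under the ~6.94 ms
# budget of a 144 Hz display, so the gap between the two is academic.
for fps in (180, 400):
    print(f"{fps} fps -> {frame_time_ms(fps):.2f} ms per frame")
```

At 180 fps each frame takes about 5.56 ms, and at 400 fps about 2.50 ms; both comfortably beat a 144 Hz display’s refresh window.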

If you really want to see an impressive increase in performance, try playing a game with RTX enabled on a GeForce GTX 1080 Ti – you will find you can’t. Even if the RTX 2080 Ti only gets 5 frames per second, that is still an infinite increase. We now have the potential for something that, until just a few short months ago, was deemed impossible in the near to mid term. We need to step back and appreciate the technology for what it is and the revolutionary (as opposed to evolutionary) change it can bring. When a baby takes its first steps, the parents don’t judge it for being slow and unstable; they are more likely impressed that their little one is making such good progress. The same should be true of technology, and that includes NVIDIA’s RTX.