While we’ve seen techniques similar to FSR 2.0 in the console space for many years, the quality is such that it can rightfully take its place amongst the most recent second-generation upscalers, such as Unreal Engine’s Temporal Super Resolution (TSR), as seen in UE5. While the likes of DLSS 1.0, checkerboard rendering and older forms of TAA upscaling aim to produce native-like image quality from roughly half the output pixel count, FSR 2.0 and other second-gen techniques target similar quality from just a quarter of the pixels, with a 4K output from a 1080p base image often put forward as the sweet spot. DLSS and Intel’s upcoming XeSS utilise machine learning via specialised onboard hardware, but FSR uses the compute power of the GPU itself, meaning that it should run on any modern graphics card. As you’ll see, we successfully ran FSR 2.0 on the venerable Radeon RX 580 and Nvidia’s GeForce GTX 1060.

In terms of the options available, FSR 2.0 offers settings very similar to both FSR 1.0 and DLSS. There are three modes available: performance, balanced and quality. Using 4K as an output resolution, performance and quality modes use the same internal resolutions as DLSS: 1080p and 1440p respectively. Meanwhile, the balanced mode (2259x1270) uses a slightly higher resolution than DLSS balanced mode (2227x1253). Generally, in common with all image reconstruction techniques, the higher the internal resolution, the greater the output quality.

So, how does that quality shape up? There’s a lot to get through here, and I recommend watching the video for the whole picture. I put FSR 2.0 up against DLSS 2.3 and native resolution rendering across a number of scenarios, selecting test cases that historically challenge reprojection technologies: static views, motion, sub-pixel detail, non-opaque geometry (eg foliage) and animation. The tests are exhaustive and worth watching closely, though there are some screenshot comparisons on this page to show some of my working and insights.

To cut to the chase, FSR 2.0 is similar to DLSS 2.3 in that it can actually look better than native resolution rendering in some scenarios: in particular, a 4K output in quality mode (internal resolution: 1440p) generally looks impressive. The most challenging scenarios, however, do tend to reveal more artefacts than the Nvidia equivalent, and the more aggressive the upscaling mode chosen, the more visible those artefacts become. However, we are talking about relative quality here: the fact is that viewed in isolation, FSR 2.0 in 4K performance mode, rendering internally at just 1080p, still looks pretty impressive - a credit to AMD.

A further caveat comes into play, though: quality at lower output resolutions. If you’re targeting a 1080p or 1440p output, the reduction in source data to work with can again cause a hit to the detail level. As a rule of thumb with DLSS, the lower your display resolution, the higher the quality level you should use - and this is even more important with FSR 2.0.

The performance considerations are also worth noting. As the table on this page reveals, the cost of FSR 2.0 viewed in isolation sees the technique running faster on Nvidia GPUs than on AMD hardware in almost all scenarios, while DLSS 2.3 delivers higher quality at a lower rendering cost on RTX cards. Also, the wider the gulf between the internal rendering resolution and the output resolution, the higher the processing cost of FSR 2.0.
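To put some numbers on that gulf, here’s a minimal sketch of how the internal resolutions quoted above fall out of each mode’s per-axis scale factor (1.5x for quality, 1.7x for balanced, 2.0x for performance, as per AMD’s FSR 2.0 documentation). The rounding behaviour is an assumption for illustration rather than the exact logic any given game uses, which is why the balanced figure can land a pixel or two off the 2259x1270 quoted earlier.

```python
# Derive FSR 2.0 internal rendering resolutions from a 4K output target.
# Scale factors per AMD's FSR 2.0 documentation; exact per-axis rounding may vary per title.
OUTPUT_W, OUTPUT_H = 3840, 2160

FSR2_SCALE = {
    "quality":     1.5,   # 2560x1440 internal at 4K
    "balanced":    1.7,   # roughly 2259x1270 internal at 4K
    "performance": 2.0,   # 1920x1080 internal at 4K
}

for mode, factor in FSR2_SCALE.items():
    w, h = round(OUTPUT_W / factor), round(OUTPUT_H / factor)
    pixel_share = (w * h) / (OUTPUT_W * OUTPUT_H)
    print(f"{mode:>11}: {w}x{h} internal ({pixel_share:.0%} of the output pixels)")
```

At 4K, performance mode works from just a quarter of the output pixels - exactly the kind of wide gulf between internal and output resolution that pushes the cost of the FSR 2.0 pass up.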
This rising cost is not so much of an issue in practice, because you are saving far more GPU time by operating at a lower internal resolution in the first place - but there does come a point where, say, pushing a lower-power, 1080p-focused GPU to output 4K via FSR 2.0 essentially becomes a waste of time. Ultimately, the conclusions I draw from my testing are fairly straightforward: FSR 2.0 at 4K is quick on big GPUs and slow on smaller GPUs, but perhaps more realistically, lower-end hardware works relatively efficiently when upscaling to 1080p or 1440p.

Looking at performance more specifically, FSR 2.0 is designed to offer similar performance improvements to DLSS, with minimal impact on image quality. For example, in Deathloop, I found that the Radeon RX 6800 XT runs below 60fps at max settings with RT enabled at native 4K. FSR 2.0 in quality mode improves frame-rates by 54 percent, increasing to 92 percent in performance mode, both allowing for great 60fps (or even higher) experiences - there’s a short worked calculation at the end of this piece showing how those percentages translate into frame-rates. At the lower end of the hardware scale, an RX 580 outputting at native 1080p cannot run the game at max settings with a consistent 60fps. The older FSR 1.0 in performance mode increases performance by 44 percent, but in terms of quality, it leaves a lot to be desired. FSR 2.0, meanwhile, produces a far superior output image, though the performance uplift isn’t quite so high - but a 38 percent boost is still impressive, and it’s still a powerful tool for reaching 60fps.

Again, I do recommend watching the video to get a better idea of the nuances of FSR 2.0, but the improvement over its predecessor is huge. FSR 1.0 delivered improved results over the most basic of upscalers but fared poorly against reconstruction-based techniques, up to and including DLSS. FSR 2.0, however, is a resounding success based on what we’ve seen so far. Implementations and quality might vary from title to title - as we have seen with DLSS - but not only has AMD delivered a great piece of tech, it has managed to offer up quality levels better than other software-based solutions and comparable to the best, including Epic’s TSR.

And if it cannot fully match DLSS, perhaps it does not need to: subjectively it still looks very good, and that’s all it needs to deliver, since machine learning-based techniques demand a certain type of graphics card. FSR 2.0 runs on a far wider range of hardware than DLSS, so for those who do not have an RTX card, it’s a great option. It’s also the first iteration of the technology, so there is every chance that the areas of weakness we did find could be improved via game patches or future versions of the algorithm. Take-up from developers will be key, but ultimately, the inputs it requires are similar to those of DLSS and XeSS, so adoption should be similarly rapid - and I look forward to seeing how AMD’s techniques work on future titles.
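As a closing illustration of what those Deathloop percentages mean in practice, here’s a minimal sketch of the arithmetic. The baseline frame-rate is purely hypothetical - the testing above only states that the RX 6800 XT falls below 60fps at native 4K with RT - so only the 54 and 92 percent uplifts come from the review itself.

```python
# Translate percentage uplifts into frame-rates and frame-times.
# The 40fps baseline is hypothetical -- the review only says "below 60fps" at native 4K with RT.
baseline_fps = 40.0  # assumed native 4K + RT figure, for illustration only
uplifts = {"FSR 2.0 quality": 0.54, "FSR 2.0 performance": 0.92}

print(f"native: {baseline_fps:.1f}fps ({1000 / baseline_fps:.1f}ms per frame)")
for mode, uplift in uplifts.items():
    fps = baseline_fps * (1.0 + uplift)
    print(f"{mode}: {fps:.1f}fps ({1000 / fps:.1f}ms per frame)")
```

Even with a conservative assumed baseline, the 54 percent quality-mode uplift is enough to clear 60fps, which lines up with the experience described above.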