EVGA RTX 2080 Super XC Hybrid Review: Cool Running, but Worth It?

Over the past few months, we’ve looked at several Turing-based Nvidia cards, which are some of the best graphics cards available and often rate near the top of our GPU hierarchy. None of them have been quite like the EVGA RTX 2080 Super XC Hybrid, which combines an air cooler with an integrated liquid cooling solution to keep the GPU and memory frosty. In theory, that should keep temperatures down and allow boost clocks to run a bit higher than most air-cooled solutions while potentially cutting down on noise.

The EVGA RTX 2080 Super Hybrid XC we have for review comes with a small factory overclock (+15 MHz boost over reference speeds) on the core and the same 1,938 MHz (15.5 Gbps) clock speed on the RAM. We’ll see how this card and its hybrid cooling solution compares to the RTX 2080 Ti Founders Edition (FE), Asus ROG Strix RTX 2080 Super OC, RTX 2080 Super FE, and an RTX 2070 Super FE from the Nvidia side. Representing AMD in this review is an ASRock RX 5700 XT Taichi, Radeon VII, and reference RX 5700 and 5700 XT.

While performance was similar between the EVGA Hybrid XC and the other 2080 Super cards, the differences between them are found in the cooling solutions, features and price. At the time of this writing, both the EVGA RTX 2080 Super Hybrid XC and the Asus ROG Strix are priced around $780 while the Nvidia RTX 2080 Super Founders Edition normally goes for $700 (assuming you can find it in stock). ASRock’s RX 5700 XT Taichi is priced at $439.99 by comparison, and the RTX 2070 Super nominally goes for $500 (again, when it’s in stock).

For this review, we’ll look at performance differences in frames per second (fps), how the hybrid cooler performed compared to traditional air-cooled cards, power consumption, and overall value.

Features

Though there are some differences between the 2080 Super and other RTX models (such as the SM count and, consequently, the shader, TMU and ROP counts), all RTX 2080, 2080 Super, and 2070 Super cards sport the same Turing TU104 silicon under the hood. The TU104 die is manufactured on TSMC’s 12nm FFN (FinFET Nvidia) process using 13.6 billion transistors on a 545 mm² die. All three TU104 cards come with 8GB of GDDR6 sitting on a 256-bit bus.

Clock speeds on the EVGA RTX 2080 Super XC Hybrid are set at a 1,650 MHz base clock with a listed boost clock of 1,830 MHz. Nvidia’s listed boost clocks tend to be a minimum in practice, whereas AMD’s boost clocks are a maximum (AMD cards typically run much closer to their Game Clock). Memory speed for the card is set to 1,938 MHz (15.5 Gbps effective), which yields 496.1 GB/s of bandwidth. This configuration is enough for gaming at its target resolutions of 2560×1440 or 3840×2160 (4K UHD).
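For readers who like to check the math, that bandwidth figure follows directly from the memory clock and bus width. A quick sketch (the 8x multiplier is the usual effective-rate convention for GDDR6):

```python
# GDDR6 bandwidth sanity check for the RTX 2080 Super's memory subsystem.
# The listed 1,938 MHz memory clock corresponds to an effective 15.5 Gbps
# per pin (the usual 8x convention for GDDR6), across a 256-bit bus.
memory_clock_hz = 1_938e6
effective_gbps_per_pin = memory_clock_hz * 8 / 1e9    # ~15.5 Gbps
bus_width_bits = 256

bandwidth_gb_per_s = effective_gbps_per_pin * bus_width_bits / 8
print(f"{bandwidth_gb_per_s:.1f} GB/s")               # ~496.1 GB/s, as listed
```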

Nvidia lists the Geforce RTX 2080 Super as a 250W card in Founders Edition form and recommends a 650W power supply. EVGA doesn’t change the power draw or power supply recommendation for the Hybrid XC. 6-pin and 8-pin PCIe power connectors are required.

The following table summarizes the specifications of the Nvidia GeForce RTX 2080 Ti, RTX 2080 Super and RTX 2080 Founders Editions, along with the EVGA and Asus RTX 2080 Super cards.

Design

The EVGA RTX 2080 Super Hybrid XC is a dual-slot video card measuring 10.4 x 4.4 x 1.6 inches (265.6 x 111.2 x 40 mm). The card is full-size and sticks out about an inch past the width of our ATX motherboard, making this a long card though certainly not the biggest we’ve tested. Height shouldn’t be a concern as it sits flush with the I/O plate, but due to the length, it may not fit into small form factor (SFF) systems. Be sure to verify the space inside your chassis before buying.

Like its air-cooled counterparts, the hybrid-cooled EVGA uses a stylish plastic shroud with the GeForce RTX 2080 Super name stenciled across its smoke-tinted cover. On top, “EVGA Hybrid” branding lights up via the card’s only RGB LEDs. On the right is a single ~85mm fan attached directly to a large heatsink, which blows cool air across the VRMs and out the rear and I/O side. The back of the card sports a matte black powder-coated metal backplate that protects the rear of the PCB and helps passively cool the VRMs.

To cool the GPU and memory, a pump and water block assembly sits directly on the GPU core and feeds a 120mm radiator, while a dedicated memory plate makes direct contact with the water block for optimal memory temperatures. Attached to the high fins per inch (FPI) radiator is a swappable 120mm fan that does a good job moving air quietly. Both fans have their own custom curves and can be controlled manually through EVGA Precision X1 software. The VRM fan remains off during idle and low-load operation, helping to minimize noise.

The EVGA RTX 2080 Super Hybrid XC uses a reference PCB sporting an 8+2 phase (GPU and Memory) VRM setup. The GPU VRM is controlled by an 8-channel uP9512P part, while the memory VRM is controlled by a uP9529P controller. The GDDR6 chips hidden below the memory plate are made by Samsung (P/N K4Z80325BC-HC16) and specified to run at 2000 MHz (16 Gbps effective). Even though this is a reference board, Nvidia did a good job ensuring the power delivery system could handle stock and overclocked operation.

I/O ports on the EVGA card are standard fare for high-end Turing and include three DisplayPorts (1.4b), a single HDMI (2.0b) port and a VirtualLink USB Type-C port designed for VR headsets.

How We Tested the EVGA RTX 2080 Super XC Hybrid

Our current graphics card test system consists of Intel’s Core i9-9900K, an 8-core/16-thread CPU that routinely ranks as the fastest overall gaming CPU. The MSI MEG Z390 Ace motherboard is paired with 2x16GB Corsair Vengeance Pro RGB DDR4-3200 CL16 memory (CMK32GX4M2B3200C16). Keeping the CPU cool is a Corsair H150i Pro RGB AIO, along with a 120mm Sharkoon fan for general airflow across the test system. Storing our OS and gaming suite is a single 2TB Kingston KC2000 NVMe PCIe 3.0 x4 drive.

The motherboard is running BIOS version 7B12v17. Optimized defaults were used to set up the system. We then enabled the memory’s XMP profile to get the memory running at the rated 3200 MHz CL16 specification. No other BIOS changes or performance enhancements were enabled. The latest version of Windows 10 (1909) is used and is fully updated as of March 2020.

Our GPU hierarchy provides a complete overview of the GPUs at the heart of various graphics cards and how the various models stack up against each other. For these individual third-party card reviews, we primarily focus on GPUs that compete with and are close in performance to the card that is being reviewed. However, we’ve overhauled our charting system and are including multiple other GPUs now. The main points of interest will be the RTX 2080 Ti FE, RTX 2080 Super FE, RTX 2070 Super FE along with an Asus ROG Strix RTX 2080 Super OC. For AMD, we have the Radeon VII, RX 5700 XT and RX 5700 reference cards, along with an ASRock RX 5700 XT Taichi.

Our list of test games is currently Borderlands 3, The Division 2, Far Cry 5, Final Fantasy XIV: Shadowbringers, Forza Horizon 4, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider and Strange Brigade. These titles represent a broad spectrum of genres and APIs, which gives us a good idea of the relative performance differences between the cards. We’re using driver build 442.78 for the Nvidia cards and Adrenalin 20.4.1 drivers for AMD.

We capture our frames per second (fps) and frame time information by running OCAT during our benchmarks. For The Division 2 and Metro Exodus we use the .csv files the built-in benchmark creates. For clock and fan speed, temperature and power, we use GPU-Z’s logging capabilities. We’ll be resuming our use of the Powenetics-based system for graphics card reviews in the near future. 
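For anyone wanting to reproduce the frame-time side of this at home, here’s a minimal sketch of how an OCAT capture can be reduced to an average fps figure and a 99th-percentile frame time. It assumes the PresentMon-style ‘MsBetweenPresents’ column that OCAT writes; adjust the column name to match your capture file.

```python
import csv
import statistics

# Reduce an OCAT/PresentMon-style capture to summary numbers.
# Assumes a 'MsBetweenPresents' column (PresentMon convention); other
# tools may label the per-frame time differently.
def summarize(csv_path):
    with open(csv_path, newline="") as f:
        frame_times_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    # The 99th-percentile frame time approximates worst-case stutter.
    p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
    return avg_fps, p99_ms

if __name__ == "__main__":
    fps, p99 = summarize("ocat_capture.csv")
    print(f"Average: {fps:.1f} fps, 99th percentile frame time: {p99:.2f} ms")
```

A 99th-percentile frame time that sits well above the average is what shows up subjectively as stutter, even when the average fps looks healthy.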

Looking at the 1440p ultra results, we see the EVGA RTX 2080 Super XC Hybrid averaged 105.8 fps across all nine games. The air-cooled Asus ROG Strix 2080 Super OC averaged 105.9 fps while the RTX 2080 Super Founders Edition averaged 104.9 fps—a virtual tie in overall performance. Every game averages more than 60 fps, and several are over 100 fps, allowing for high refresh gaming at these settings.

AMD’s current flagship Radeon RX 5700 XT holds its own, managing 90 fps on average with the factory overclocked ASRock RX 5700 XT Taichi, and 86.8 fps with the reference 5700 XT. Both are well behind any 2080 Super, but they’re nearly tied with AMD’s slightly older Radeon VII. Obviously, AMD’s RX 5700 series competes on price and not just raw performance, and until Big Navi arrives this is AMD’s best solution.

We should also note that all GPUs are tested using DirectX 12 in Borderlands 3, The Division 2, Metro Exodus, and Shadow of the Tomb Raider—and the Vulkan API for Red Dead Redemption 2 and Strange Brigade. We mention this because many games that offer DirectX 11 as an option actually run slightly better on Nvidia cards in that mode. Borderlands 3 in particular is typically around 10% faster in DX11 on Nvidia GPUs at 1440p and 4K, but even The Division 2 and Metro Exodus see a modest 3-5% uplift. And of course, for games like Metro Exodus and Shadow of the Tomb Raider, DX12 is required if you want to use ray tracing on an RTX card.

We felt it best to standardize on one API for each game, as it helps simplify testing. Otherwise, we’d need to check performance in every API offered for each game to determine the best choice for each GPU at each resolution and setting. That increases the number of test runs and thus testing time by 66%, plus we’d need a lengthy explanation saying which API was used on each GPU at each setting. Besides, long term we expect more games to begin using DX12 and/or Vulkan exclusively, especially once the next generation of consoles arrive with ray tracing hardware support.

Moving up to 4K, we chose to use the ultra results, as the EVGA 2080 Super XC Hybrid (and the other 2080 Supers) averaged over 60 fps in our tests — albeit by a slim margin. The EVGA card reached 62 fps, the Asus 61.6 fps and the Founders Edition 61.2 fps. Again we see very little difference between the three 2080 Super cards, but average fps doesn’t tell the whole story.

Five of our nine games fell below 60 fps: Shadow of the Tomb Raider (55.4 fps), Red Dead Redemption 2 (46.5 fps), Metro Exodus (42.1 fps), The Division 2 (46.2 fps) and Borderlands 3 (38.9 fps). You can likely have a smooth gaming experience in SOTTR (without ray tracing), but the other games would require a reduction in image quality for the best results.

We use GPU-Z logging to measure each card’s power consumption, temperatures and fan speeds with the Metro Exodus benchmark running at 2560×1440 using the default Ultra settings. The card is warmed up prior to testing and logging is started after settling to an idle temperature (after about 10 minutes). The benchmark is looped a total of five times, which yields around 10 minutes of testing. In the charts, you will see periodic blips in power use that are a result of the benchmark ending one loop and starting the next.
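If you want to crunch a GPU-Z sensor log the same way we do for the power and temperature charts, a rough sketch follows. GPU-Z column headers vary by card and version, so the two names below are placeholders to adjust against your own log.

```python
import csv

# Rough reduction of a GPU-Z sensor log to the figures discussed here.
# GPU-Z column headers vary by card and version; the two names below are
# placeholders -- open your own log and adjust them to match.
POWER_COL = "Board Power Draw [W]"
TEMP_COL = "GPU Temperature [C]"

def reduce_log(path):
    power, temps = [], []
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            cleaned = {k.strip(): (v or "").strip() for k, v in row.items() if k}
            if cleaned.get(POWER_COL):
                power.append(float(cleaned[POWER_COL]))
            if cleaned.get(TEMP_COL):
                temps.append(float(cleaned[TEMP_COL]))
    return sum(power) / len(power), max(temps)

avg_w, peak_c = reduce_log("gpuz_sensor_log.txt")
print(f"Average board power: {avg_w:.0f} W, peak GPU temperature: {peak_c:.0f} C")
```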

We also use FurMark to capture worst-case power readings. Although both Nvidia and AMD consider the application a ‘power virus,’ or a program that deliberately taxes the components beyond normal limits, the data we can gather from it offers useful information about a card’s capabilities outside of typical gaming loads. For example, certain GPU compute workloads including cryptocurrency mining have power use that can track close to FurMark, sometimes even exceeding it.

Power Draw

Starting off with the gaming power charts, the EVGA RTX 2080 Super XC Hybrid averaged 238W, which puts the card between the Asus ROG Strix at 246W and the RTX 2080 Super FE at 232W. These results fall where expected. Users will be hard-pressed to notice a difference in their electric bill between them. 

Looking at the Furmark results, the differences between the cards shrank dramatically with the EVGA and Asus both using 247W while the Founders Edition averaged 248W. This tells us all of these cards have a power limit around the 250W mark, as expected. 

Temperatures

Temperatures during game testing reached a peak of 53 degrees Celsius on the EVGA Hybrid card which is 9 degrees Celsius less than the Asus ROG Strix (62C) while the Founders Edition peaked at the highest temperature of 72C. In this short testing, the 120mm AIO did a great job on the GPU temperatures, the lowest we’ve recorded for this test, in fact. Because of the lower temperatures, the EVGA card is able to maintain higher boost bins, as we’ll see below. 

Temperatures when running FurMark weren’t much different from those during the gaming test. Nvidia cards tend to throttle hard when running this stress test, so temperatures tend to be similar. The EVGA Hybrid peaked at 53 degrees Celsius, the Asus at 61C, and the Founders Edition at 75C. All of the coolers keep the cards in check here, but the AIO on the EVGA card is again the coolest by far.

Fan Speeds

Fan speed reporting for the EVGA covers the onboard fan cooling the VRMs. It peaked around 1,700 RPM in our testing, while the Asus ROG Strix and Founders Edition fans ran faster, peaking at 1,750 RPM and 2,000 RPM respectively. The loudest part of the card is the pump, as both the onboard fan and the one attached to the radiator stayed quiet throughout.

Under FurMark, not much changed with fan speeds either. All of the cards’ fans ramped up smoothly to manage thermals. The EVGA Hybrid’s VRM cooling fan reached around 1,700 RPM, the slowest of the test cards, while the fan on the radiator peaked at 1,600 RPM, with both operating quietly throughout testing. The pump and its distinct hum can be heard over the fans.

Clock Rates

Clock speeds for the EVGA Hybrid card averaged the highest in our game tests at 1,993 MHz. Compare this to the Asus ROG Strix at 1,988 MHz—only 5 MHz behind. While that isn’t much, the Asus card is supposed to run 30 MHz faster according to the boost clocks. But since the hybrid card kept the GPU cooler than the air-cooled Asus, it reached higher overall boost clocks. 

Clock speeds during FurMark dropped considerably on these Nvidia cards, as is normally the case. Here the EVGA Hybrid averaged 1,819 MHz while the Asus ROG Strix was notably lower at 1,776 MHz. Again we see the cooler-running EVGA card maintaining higher clocks. These workloads aren’t typical, so users can expect clock speeds during gaming to be closer to the game test results.

EVGA includes its own software for monitoring and tweaking video cards, called Precision X1. The application controls everything from fan speeds to clock speeds and also provides monitoring. Users can overclock manually or use the built-in ‘VF Tuner’ to automatically search for a stable overclock, as well as set static fan speeds and create custom fan curves.

One unique feature within Precision X1 is the boost lock function that locks the clock speeds to the boost clock regardless of external factors — it’s like overclocking the old school way (without boost affecting things). It can also be useful if you want to test a GPU at a static clock speed to compare performance with other GPUs.
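If you’d rather verify a locked clock from outside EVGA’s tool, the same check can be scripted against Nvidia’s NVML interface. Here’s a minimal sketch using the pynvml bindings to poll the graphics clock and temperature while a benchmark loops (the one-second interval and 60-sample window are arbitrary choices):

```python
import time
import pynvml  # pip install nvidia-ml-py3

# Poll the GPU's graphics clock and temperature while a benchmark runs,
# e.g. to confirm a "boost lock" is actually holding a static clock.
pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

samples = []
for _ in range(60):                       # one minute at one-second intervals
    clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    samples.append((clock, temp))
    time.sleep(1)

pynvml.nvmlShutdown()
clocks = [c for c, _ in samples]
print(f"Clock min/max: {min(clocks)}/{max(clocks)} MHz, "
      f"peak temp: {max(t for _, t in samples)} C")
```

Recent drivers also expose clock locking directly through nvidia-smi, though support varies by GPU generation and driver version.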

EVGA Precision X1 works well for its intended purposes, offering a complete application for monitoring and controlling video cards from EVGA and other board partners. Along with MSI Afterburner, it’s one of the two best GPU overclocking and monitoring utilities around.

EVGA’s RTX 2080 Super XC Hybrid proved to be more than capable at 1440p using ultra settings and 4K at high to ultra settings in most games. Compared to the RTX 2080 Super Founders Edition and Asus ROG Strix RTX 2080 Super OC, the integrated AIO kept the core temperatures much cooler. Since the hybrid card runs around 10C cooler than the others, it boosted higher than even the Asus, which is rated 30 MHz higher out of the box.

However, even with the differences in boost clocks and cooling, all three 2080 Super cards performed nearly the same, with less than 1 fps between them. The lower temperatures can be great for hotter climates and part longevity, or if you want to manually overclock, but out of the box performance isn’t going to vary much between the various RTX 2080 Super models.

Although the EVGA RTX 2080 Super XC Hybrid was generally quiet, the pump makes a distinct hum that’s different from the fans on the card and radiator, as well as any chassis fans. This is typical of most liquid cooling solutions and inherent to pumps in general. We didn’t find the sound loud or off-putting, but it was noticeable over the other PC fans. The EVGA card’s fans, meanwhile, were quiet throughout testing.

For many users, the price of a graphics card is a huge factor, and none of the RTX 2080 Supers are anywhere close to being affordable. The EVGA RTX 2080 Super XC Hybrid and the Asus ROG Strix 2080 Super both cost around $780, while the Nvidia RTX 2080 Super Founders Edition can often be found for $700 (at least, it can be when COVID-19 isn’t screwing up the supply chain). The least expensive RTX 2080 Super we can find right now is this PNY RTX 2080 Super blower, priced at $694 — and unless you really like blowers, we’d recommend paying $6 more for the Founders Edition or something like the MSI Ventus XS OC. Sticking with hybrid cooling cards, MSI’s RTX 2080 Super Sea Hawk X lists for $734.99, while the Gigabyte Aorus Waterforce RTX 2080 Super sports a large 2x120mm radiator and goes for $839.99.

One thing you really need to consider when buying a hybrid cooled graphics card is the size and placement of both the card and the radiator. Most mid-size and larger ATX cases should be able to accommodate both, but smaller cases can be a problem. Not to mention, the radiator is one more thing to deal with when cleaning out your PC or swapping components.

The EVGA RTX 2080 Super XC Hybrid proved to be a capable card for both 1440p and 4K gaming, though the latter will require dialing back settings in some titles. The hybrid cooler with its 120mm radiator did a great job keeping the GPU cool during testing. However, even though we saw higher boost clocks, the performance gained from the small difference ended up being negligible. Priced between other hybrid-cooled cards, the EVGA XC Hybrid is a viable solution. Just come for the thermals rather than the out-of-the-box performance.

HP V6 2x8GB DDR4-3200 Review: Old Hand Or New Player?

The HP brand may be a household name, but anyone deep enough into PC hardware to read enthusiast publications probably doesn’t associate the company with the memory market. Most of us associate the HP name with expensive ink cartridges and an ever-rotating array of notebooks. While the greatest benefit of being associated with a well-known 81-year-old company might be the ability to draw in neophytes, those researching and buying enthusiast tech like high-end desktop memory often pay little attention to marketing. So, what is the tech?

Available at data rates ranging from DDR4-2666 to DDR4-3600, each HP V6 memory kit includes one or two DIMMs, a product ID insert with installation guide and RMA contacts, and a lengthy multi-language regulatory compliance sheet. Heat spreader color is said to identify each kit, with red for DDR4-2666 and blue for DDR4-3000. But HP confuses the issue by cloaking both its DDR4-3200 and DDR4-3600 in black.

A look at the back of the package indicates that these modules can be found on the main webpage of HP Development Inc., but also that they’re manufactured and distributed under license. Sure enough, our search of the HP website yielded no information about this memory. It’s not until buyers purchase this memory, cut the seal and remove/unfold the face card that the distributor is revealed:

HP business partner Multipointe Channel Solutions (MCS) has been marketing HP-branded SSDs for a while now, but doesn’t discuss its own history on its website. Any customer assurance should come from the fact that the company is still in operation, rather than its ties to the HP name.

As MCS only offers 8GB modules, its P/N 7TE41AA#ABC DDR4-3200 dual-channel kit has a total of 16GB. XMP timings of 16-18-18-38 at 1.35V boost the kit past its basic DDR4-2666 configuration, but that relies on users owning an XMP-compatible motherboard and enabling XMP mode. This of course is how the entire performance memory market works, but it’s still something that uninformed buyers might not know.
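As a point of reference, those CAS numbers translate into absolute latency via the data rate; a quick worked example (the CL14 line uses the tightened timings we reached later in testing):

```python
# First-word latency in nanoseconds from CAS latency and DDR4 data rate.
# A DDR4-3200 kit clocks at 1,600 MHz, so latency_ns = CL * 2000 / data_rate.
def cas_latency_ns(cl, data_rate):
    return cl * 2000 / data_rate

print(cas_latency_ns(16, 3200))   # XMP: CL16 at DDR4-3200   -> 10.0 ns
print(cas_latency_ns(14, 3200))   # tuned: CL14 at DDR4-3200 -> 8.75 ns
```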

One thing that stands out about HP memory in general is its 5-year warranty, which is substantially less than the lifetime warranties of others (though the meaning of “lifetime” can sometimes be sketchy). This gives rise to questions, like “how long will I use this kit, anyway?” Five years is likely enough coverage for many, especially given we’ve seen that most defective memory dies in less than five years and would thus be covered.

Comparison Hardware

Comparison kits for our testing rounds include our latest 2x 8GB review samples from Geil, OLOy, and Team Group, the latter rated at a higher DDR4-3600 XMP. All were tested on AMD’s Ryzen 7 3700X and MSI’s memory-mastering MEG X570 Ace, using Toshiba’s OCZ RD400 SSD and Gigabyte’s GeForce RTX 2070 Gaming OC 8G to reduce bottlenecks.

Overclocking and Latency Tuning 

The HP V6 DDR4-3200 is the first 2x 8GB kit we’ve seen in a while that wouldn’t reach DDR4-3600, which means it won’t be represented at several of our alternative performance settings. We were however able to reduce its DDR4-3200 timings to 14-16-16-32 at 1T. 

Lowest Stable Timings at 1.35V (Max) on MEG X570 ACE (BIOS 1.20) 

Unfortunately for HP, the rest of the kits reached at least DDR4-3600, and one even went to DDR4-4266. The fastest of them was also the kit rated at a higher DDR4-3600 XMP.

Benchmark Results and Final Analysis 

The HP V6 DDR4-3200 kit’s XMP bandwidth and latency are close to average in SiSoftware Sandra.

XMP gaming performance also looks good, and it even retained that performance for the most part at our lower frequency and latency settings.

The HP V6 DDR4-3200 also looks decidedly average in our timed tests, where less time means more performance. So, it should be a good deal if it’s priced well.

About that key pricing issue though: We weren’t able to find P/N 7TE41AA#ABC for sale anywhere, and instead had to price it as two P/N 7EH67AA#ABC single modules. To compete, a pair of these would need to be priced around 25% lower than two single modules. 

At its current price of about $53 per 8GB stick, this kit just isn’t competitive. That of course doesn’t mean it’s a bad product, just that there are plenty of better options available from companies more well-known in the memory market. Pricing could of course also change substantially, which could make this memory more competitive. But until it does, most consumers in the know should look to the plethora of competitors.

Intel Core i9-10900K benchmarks have leaked, and it’s still slower than the Ryzen 9 3900X

For what feels like ages at this point, we’ve been waiting for Intel 10th-generation Comet Lake processors for desktop to make their appearance. And, while we have heard plenty of rumors about when we’ll see them, we’re starting to see info suggesting what they’ll be capable of. 

The latest of these is a Geekbench 5 benchmark result spotted by renowned hardware leaker TUM_APISAK, and the results are pretty interesting. Notably, it lists the maximum frequency as 5.08GHz, which is lower than the 5.3GHz that previous leaks have suggested. This leads to a multi-core score of 11,296, which isn’t quite as powerful as AMD’s current-generation flagship.

The Geekbench entry pairs the i9-10900K with an MSI Z490-S01 motherboard (via https://t.co/wpnJTZocoZ, April 9, 2020).

We actually just retested the AMD Ryzen 9 3900X the night before this leak appeared, and the 12-core processor managed a score of 12,060, which still makes it around 7% faster than the alleged Intel chip’s result – keep in mind that the 3900X launched way back in July 2019, too.

However, the AMD Ryzen 9 3900X does fall behind this leaked benchmark in single-core performance, scoring 1,268 points in last night’s testing compared to the 1,408 in this leak. That is a pretty substantial 10% lead that Intel is potentially claiming here, which would maintain its position as the manufacturer behind the best processors for gaming. 

Obviously, we can’t wait to get this little chunk of silicon in for our own in-house testing to see exactly how it stacks up against AMD, but we still have no idea when that will actually happen. Intel will launch its next-gen processors whenever it decides the time is right, and until then we’re just going to have to wait and see.

A temporary fix?

Intel’s 10th-generation Comet Lake-S processors may narrow the massive gap that exists between AMD and Intel in the desktop world right now, but it may not last for long. Keep in mind that AMD CEO Lisa Su has said that Ryzen 4000 processors for desktop will be coming this year. 

If the Intel Core i9-10900K only manages to come within 7% of the 3900X in multi-core and beats it in single-core by only around 10%, that doesn’t bode well for Intel whenever Team Red manages to launch its next desktop platform. Word on the street, according to an AdoredTV leak, is that the Zen 3-based Ryzen 4000 lineup is going to see a 15% boost in IPC. If that’s paired with higher clock speeds on AMD’s next platform, Intel’s single-core lead could vanish.

And now that we’ve seen AMD bring the Zen 2 improvements over to mobile, there’s a lot of pressure on Intel to come up with something truly exciting. We said it in another piece touching on our brief testing of the AMD Ryzen 9 4900HS (more on that coming very soon), but we’d love to see Intel come up with its own Ryzen moment. 

Intel Comet Lake-H has just arrived and Comet Lake-S is likely right around the corner, so we’re incredibly interested to see whether or not it can shake up AMD’s stranglehold on the processor world. 

And if it does, you can bet we’ll be diving into that when the time comes. 

MSI Clutch GM30 Gaming Mouse Review: Comfy RGB Pointer

When it comes to finding the best gaming mouse, what some require is optional for others. While the most hardcore gamers may seek a mouse sensor boasting the highest CPI counts, a braided cable and a pile of programmable buttons, mainstream or casual gamers can do with a little less.

The MSI Clutch GM30 (available for $50 – $60 as of this writing) isn’t quite entry-level but cuts costs with a lower CPI (counts per inch) count than some similarly priced rivals, as well as fewer buttons. But what it lacks there it makes up for with a fabulous RGB-lit design and a build you’ll appreciate from your fingertips to your palm.

MSI Clutch GM30 Design and Comfort

The Clutch GM30 isn’t the lightest gaming mouse around, especially when compared to first-person shooter-focused ones offering lightweight designs that make flinging it across your best mouse pad a breeze. Instead, the Clutch GM30 has a bit of weight to it at 3.46 ounces (without the cable) compared to the honeycomb-style Glorious Model D’s 2.4 ounces or even the Razer DeathAdder V2 (2.9 ounces). But it’s still on par with something like the HyperX Pulsefire Raid (3.35 ounces). Very competitive gamers seeking a lightweight design should look elsewhere, but others might appreciate the Clutch GM30’s substantive feel.

The matte black Clutch GM30 measures 5.03 x 2.01 x 1.38 inches (LxWxH), which is pretty standard (the DeathAdder V2 is 5 x 2.43 x 1.68 inches). It’s the perfectly curved rear (hello!) that made it fit so nicely in the palm of my hand that I almost thought the mouse was made for me. My palm made a home there much like my cheek does in a well-fluffed pillow.

MSI targets the Clutch GM30 toward gamers with medium-sized hands who use palm or claw grips. Both grip styles offered me long-term comfort and easy access to the Clutch GM30’s two programmable, polygonal side buttons.

Speaking of programmable buttons, there are six in total, including the left and right buttons, scroll wheel and CPI button south of the scroll wheel. The CPI button lives in a small channel that also makes a great spot for resting the index finger during long scroll sessions.

The sides of the mouse also make reasonably comfortable resting places, thanks to double-injection soft rubber “dragon scale grips.” I’ve felt softer and cozier, but these areas are softer than the rest of the plastic mouse and have enough texture to prevent slipping. Plus, the scales’ rigidity may help with durability over the months (although I’ve only had the mouse for about 10 days).

Meanwhile, the thick scroll wheel, with aggressive tire-like markings, has a forgettable feel that isn’t as slick or desirable as those of other mice I often use, such as the Cooler Master MM711 or even the non-gaming Microsoft Wireless Mouse 4000. There were no issues with the wheel’s stepped movements. But for heavy scrolling it’d be nice to have the option to switch to a smooth-gliding wheel, such as seen in the more expensive Razer Basilisk V2 gaming mouse ($80 at the time of writing) or productivity-focused Logitech MX Master 3 (about $100).

The MSI Clutch GM30 provides RGB fanatics with three independently-controllable RGB zones (more in the Features and Software section): the scroll wheel, the channel framing the CPI button and the Lucky Dragon logo kissing the palm. It’s nice for so much of that to be in between the left and right buttons, where they’ll usually remain visible. The mouse looks best with prismatic effects flowing from the top of the scroll wheel down to the dragon logo.

Unfortunately, the Clutch GM30’s wire isn’t braided and looks as vulnerable to damage as any other standard rubber cable. But at least its connector seems pretty solid, with extra-thick plastic and gold plating inside. Additionally, the cable has a plastic casing that keeps that section of the cable 4.8mm (0.19 inch) off the desktop. You can slide that bit up and down the wire with a good amount of effort.

After about 10 days of using this mouse regularly, I noticed a lot of dust gathering in cracks on the mouse’s underside, but you’ll rarely look there. And if you do, hopefully the additional dragon will garner most of your attention instead.

MSI Clutch GM30 Gaming Performance

The MSI Clutch GM30’s optical PixArt PAW3327 sensor fared well in the Nordlys story of Battlefield V. At work here is a sensor with polling rates of 125, 250, 500, or a speedy 1,000 Hz with 30G acceleration and a max tracking speed of 220 IPS. The CPI switch toggles between 400, 800, 1,600, 3,200 or 6,200 and was easy to swap through on the battlefield without straining my pointer finger, as I was easily able to reach into the groove where it lives with the tip of my finger or, more easily, the middle of the index. 
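For context, here’s what those spec-sheet numbers work out to in plainer terms (simple arithmetic, not a measured result):

```python
# What the Clutch GM30's sensor spec translates to in everyday terms.
cpi = 6200                # highest CPI step
max_speed_ips = 220       # rated max tracking speed, inches per second
polling_hz = 1000         # fastest polling rate

max_speed_m_per_s = max_speed_ips * 0.0254             # ~5.6 m/s of hand movement
counts_per_report = cpi * max_speed_ips / polling_hz    # counts per 1 ms report

print(f"Max tracking speed: {max_speed_m_per_s:.1f} m/s")
print(f"Counts per report at that speed: {counts_per_report:.0f}")
```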

Other mice in this price range bring higher CPI counts (the DeathAdder V2 goes up to 20,000, and the PulseFire Raid goes to 16,000). But I could easily do rapid scans of the Norwegian battlegrounds and quickly stop to pinpoint an enemy’s small helmet. The mouse also kept up with my most erratic movements, such as jerking my head about to locate an enemy.

Another standout for gaming was the clickiness of the left and right buttons’ Omron switches. They’re supposed to be durable and last for over 20 million clicks. In our testing, they offered a snappy responsiveness that you don’t see with other gaming mice, such as the Cooler Master MM711. During rapid-fire attacks the two buttons felt as eager as I was to jump into action with audible, sure clicks to accompany the bangs of the MP40.

The scroll wheel offers line-by-line movement and doesn’t move far, even with my most powerful flicks. In games where I would do a lot of scrolling, like if I used it to spam a critical attack, it’d get a little tiring.

While gaming, it was easy to engage either of the two angular side buttons because my thumb was usually resting on them and they jut out sharply. If I had my way, the front one would sit slightly further back so that it’d be as easy to press as the back one.

The mouse’s cable never snagged during gameplay or work during my week-and-a-half with it. It’s purposely elevated 4.8mm, and the plastic casing really helped ensure that, plus I could slide that down to make sure the cable never became a drag.

MSI Clutch GM30 Features and Software

The Clutch GM30 is supposed to work with MSI’s Dragon Center app for controlling features like RGB lighting and programmable buttons. However, the software wouldn’t install properly at the time of writing. I reached out to MSI, which confirmed the issue. For now, that means the mouse has limited customization options, including, sadly, over the six programmable buttons.

The pointer’s RGB lighting is still wowing me, but I don’t have easy control of its effects. Sans software, you can control the lighting by holding down the CPI switch and one of the other buttons. You can still change brightness (3 levels, plus off), switch lighting effects (9 modes, including steady, breathing, radar and whirlpool, plus off) and adjust speed, direction, color and fade-off speed. Of course, none of this is as seamless — or seamless at all — without software.

Bottom Line

The MSI Clutch GM30 is a winner when it comes to design. Despite its dragon emblem and RGB lighting, it looks tasteful. More importantly, its well-curved design, snappy Omron switches, textured side grips and easily accessible buttons make long-term use — whether gaming or doing work — not only a breeze, but enjoyable.

Similarly priced rivals, such as the HyperX Pulsefire Raid ($60) and Razer DeathAdder V2 ($70), bring higher CPI counts and more buttons. But the average gamer will be able to navigate games well with the MSI Clutch GM30.

For a palm/claw grip companion that your hand will gravitate toward and good-looking RGB, the MSI Clutch GM30 is a well-priced choice. If only we could get some working software.

PNY XLR8 2x16GB DDR4-3200 Review: A Really Big Deal?

Any tech geek who’s been in a coma for the past decade would be excused for thinking of PNY as primarily a memory company, but its recent marketing efforts have focused primarily on storage and graphics cards. It’s not that the firm dropped out of PC memory, but that it appeared to be resting on its reputation as new kits were released. That’s about to change with its latest XLR8 modules.

The snazzy new kit comes in plain packaging at a surprisingly low price: it’s available from several sellers for around $155, and the firm’s online store recently dropped its price from $145 to $140. That makes it $10 cheaper than the least-expensive 32GB kit we’d previously tested.

Like every other DDR4-3200 kit we’ve tested, XLR8 part number MD32GK2D4320016XR uses Intel XMP overclocking technology to push mainstream DRAM ICs (memory chips) to their rated settings at 1.35V. Buyers whose boards aren’t capable of XMP will fall back to a top non-XMP configuration of DDR4-2400, which is still a bit faster than the DDR4-2133 default of most competitors.

We’re fortunate enough to have a full range of 2x 16GB DDR4-3200 kits with 16-18-18-36 timings to compare, and have included the cheapest of those for this battle of value supremacy. Readers who’d like to understand a bit more about that data rate and timings should check out our PC Memory 101 feature. The test system uses AMD’s fast Ryzen 7 3700X to feed data through MSI’s memory-mastering MEG X570 Ace from Toshiba’s OCZ RD400 SSD, while Gigabyte’s GeForce RTX 2070 Gaming OC 8G pushes the pixels. 

PNY XLR8 ties HyperX Predator RGB for the worst overclocking headroom on our platform, though getting anywhere beyond rated settings is somewhat of a gift.

We didn’t find much flexibility in this XLR8 kit’s timings either, as dropping the speed from DDR4-3200 to DDR4-2933 allowed only one cycle of latency to be shaved off. But PNY could have other tricks up its sleeve. 

Not only did XLR8 have the worst overclocked performance results in Sandra, but even its XMP bandwidth came out slightly behind competitors. It did score the lowest XMP latency though, which could point to better-optimized advanced timings. 

The XLR8 also fell slightly behind in gaming performance, though by a far smaller amount than its price difference. Could this be the best value?

XLR8 DDR4-3200 makes up for some of its earlier losses in our performance average by getting the best XMP-based 7-Zip encode time. 

Final Thoughts

Two of the kits in today’s comparison have RGB, which some users don’t want, and others are willing to pay a little for. None of that is considered in a basic performance-per-dollar chart.
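For the curious, the chart’s value metric is nothing fancier than relative performance divided by price. A sketch with placeholder figures (the shared performance index of 100 is illustrative of the near-tie we measured; the prices reflect the XLR8’s roughly $140 street price against a kit costing about $15 more):

```python
# Performance-per-dollar as used in the chart: relative performance / price.
# The shared performance index of 100 is illustrative of the near-tie we
# measured; prices are the XLR8's ~$140 versus a kit costing ~$15 more.
def perf_per_dollar(perf_index, price_usd):
    return perf_index / price_usd

xlr8 = perf_per_dollar(100, 140)
rival = perf_per_dollar(100, 155)
print(f"XLR8: {xlr8:.3f}, rival: {rival:.3f}")   # the XLR8 comes out ~10% ahead
```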

Those who only want the best performance per dollar will find that the XLR8 delivers, providing the same overall performance as the second-cheapest kit while being priced up to $15 less. Given that big value surprise, we’re looking toward more offerings from PNY.

Save $40 on this overclocked Gigabyte Radeon RX 5700 XT graphics card

The Radeon RX 5700 XT holds up as one of the best graphics cards in the sub-$500 tier, and there are a handful of custom-cooled models that sell for below $400. Gigabyte’s Radeon RX 5700 XT Gaming OC is not typically one of them, though it’s on sale right now for $369.99 on eBay (via Newegg).

This card is normally priced at $409.99. Newegg is selling it for $40 below that price through its eBay account, and in the process is even undercutting its own store listing—you can save $10 on the same card on Newegg’s website by applying coupon code VGAPCRW4742 at checkout, but that only brings the price down to $399.99. So, you still come out ahead by going through eBay.

My only reservation about buying a graphics card right now is that both AMD and Nvidia are expected to launch new models later this year—RDNA 2 and Ampere, respectively. You will have to weigh the pros and cons of playing the waiting game.

If you need an upgrade right now, however, this sale price can help stave off buyer’s remorse when the newer and shinier stuff arrives. The 5700 XT is a good choice for 1440p and 1080p gaming. It’s generally faster than a GeForce RTX 2060 Super, and if you’re okay with dipping below 60fps, it can drive a 4K resolution in some games as well (it averaged around 50fps in the 11 games we used to test the card in our review).

This card comes with a mild factory overclock—Gigabyte goosed the base clock to 1,650MHz (up from 1,605MHz) and reports the ‘game’ clock at 1,795MHz (up from 1,755MHz). It also wields a custom triple-fan cooling solution to keep temps in check.

Sabrent Rocket NVMe 4.0 M.2 SSD Review: A High-Performance Value

Sabrent has a hot seller on its hands right now, and for good reason. The company’s Rocket NVMe 4.0 is cooked up with the same ingredients as the other Gen4 SSDs on the market so far. This means it’s packing Kioxia’s latest 3D TLC NAND and is powered by none other than Phison’s PS5016-E16 NVMe SSD controller. And, while fairly expensive per GB, Sabrent’s Rocket NVMe 4.0 is priced well under most high-end competitors, making it one of the best bang-for-your-buck Gen4 drives yet.

Just note that Sabrent’s warranty policy will only cover the Rocket NVMe 4.0 for up to 1 year if you do not register the SSD within 90 days of purchase. But, if you do, you will receive a longer 5-year warranty instead. That’s a small hassle to accept in exchange for the lower price at checkout.

While you have to manually register the Rocket NVMe 4.0 for its full warranty, you shouldn’t ever have to worry about the device’s endurance. With class-leading endurance ratings, our 2TB sample is covered to withstand up to 3,600TB of writes within the warranty period.
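To put that 3,600TB figure in perspective, here’s the usual conversion into drive writes per day (DWPD) over the registered five-year warranty:

```python
# Convert the 2TB Rocket NVMe 4.0's rated endurance into drive writes per
# day (DWPD) across the registered 5-year warranty window.
rated_tbw = 3600          # terabytes written
capacity_tb = 2
warranty_years = 5

dwpd = rated_tbw / (capacity_tb * warranty_years * 365)
print(f"{dwpd:.2f} DWPD")  # ~0.99: nearly one full drive write per day, every day
```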

It comes in an M.2 2280 form factor and is available in three capacities: 500GB, 1TB, and 2TB. In terms of price, the drive is hard to beat within its niche; it undercuts most other Gen4 SSDs out there. The 1TB and 2TB capacities are rated to hit sequential speeds of up to 5/4.4 GBps and up to 750,000 IOPS, and the smaller 500GB model’s write speed peaks at 2.5 GBps, along with lower peak random performance.

Software and Accessories

Sabrent’s Rocket NVMe 4.0 comes supported by a few pieces of software. You get a free OEM copy of Acronis True Image, and if you have any issues cloning due to the device’s sector size, Sabrent’s Sector Size Converter (SSC) lets you switch between 4K and 512e sector sizes for compatibility. Additionally, Sabrent provides a Control Panel application, an SSD toolbox you can use to monitor the device and upgrade the firmware if an update is ever released.

A Closer Look

We have to give kudos to Sabrent on the black PCB and very attractive label design. The copper label looks nice and helps to aid in cooling, but on our 2TB sample, it may not be enough to prevent throttling under heavy loads. We will explore this more later on.

At the heart of the SSD is the Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD controller. Built on a 28nm process node and featuring dual Cortex R5 CPU cores with dual co-processors (dubbed CoXProcessor 2.0), the overall design is similar to Phison’s E12. The main differences between the two are the PCIe Gen4 PHY and Phison’s updated 4th-gen LDPC ECC engine. It utilizes a DRAM caching architecture to maintain strong performance under heavy workloads. Our 2TB sample features two 1GB SK Hynix chips for the task of FTL table mapping.

It also supports thermal monitoring, TRIM, and the Format NVMe command to securely wipe data. Plus, it has end-to-end data protection to keep data safe and power management support for better efficiency.

The drive also uses Kioxia’s BiCS4 96L TLC, which means our 2TB Rocket NVMe 4.0 sample utilizes thirty-two 512Gbit NAND dies spread across the four NAND packages on the PCB. The drive has 9% of the NAND set aside as over-provisioning space to optimize garbage collection.

Comparison Products

We put Sabrent’s Rocket NVMe 4.0 up against quite a few high-end competitors. Intel’s Optane SSD 905P is by far the most expensive, but it offers the lowest random latency of the bunch and doesn’t slow down due to garbage collection. We also threw in the Samsung 970 Pro, Samsung 970 EVO Plus and Adata’s XPG SX8200 Pro, one of our favorite SSDs for the price.

Additionally, we threw in Patriot’s Viper VPR100, which utilizes Phison’s E12 NVMe controller and the Viper VP4100, which has a Phison E16 controller powering it. For reference, we also added in the Intel SSD 660p, featuring cheap QLC NAND flash, as well as Crucial’s MX500 and WD’s Black hard drive, both SATA based.

Game Scene Loading – Final Fantasy XIV

Final Fantasy XIV’s Stormblood and Shadowbringers benchmarks are two free real-world game tests that easily and accurately compare game load times without the inaccuracy of using a stopwatch.

Sabrent’s Rocket NVMe 4.0 is significantly faster than an HDD, but it falls near the back of the pack with some of the slowest load times of the SSD bunch. However, the difference is only a few moments, and the E16-powered Rocket NVMe 4.0 is faster than the E12-powered Viper VPR100.

Transfer Rates – DiskBench

We use the DiskBench storage benchmarking tool to test file transfer performance with our own custom blocks of data. Our 50GB data set includes 31,227 files of various types, like pictures, PDFs, and videos. Our 100GB data set includes 22,579 files, 50GB of which are large movie files. We copy the data sets to new folders and then follow up with read tests of a newly written 6.5GB zip file, an 8GB test file, and a 15GB movie file.

When it comes to moving around moderate-sized folders, the Sabrent Rocket NVMe 4.0 shows great performance. It also earns top marks in the 100GB transfer and the various large-file read tests.

Trace Testing – PCMark 10 Storage Tests

PCMark 10 is a trace-based benchmark that uses a wide-ranging set of real-world traces from popular applications and common tasks to measure the performance of storage devices. The quick benchmark is more relatable to those who use their PCs lightly, while the full benchmark relates more to power users. If you are using the device as a secondary drive, the data test will be of most relevance. 

Sabrent’s Rocket NVMe 4.0’s strong performance carries over to PCMark 10’s latest storage tests, trading blows with Patriot’s Viper VP4100 (the other E16 contender) and leading the rest of the NAND-based competition. Only the Intel Optane 905P can best the Phison-based drives in application-driven tasks.

Trace Testing – SPECworkstation 3

Like PCMark 10, SPECworkstation 3 is a trace-based benchmark, but it is designed to push the system harder by measuring workstation performance in professional applications.

Completing SPECworkstation 3’s storage benchmark in just under 23 minutes, Sabrent’s Rocket NVMe 4.0 does quite well again. It is second only to the Intel Optane 905P and outperforms the Samsung SSDs as well as the Adata XPG SX8200 Pro. If you are currently using mechanical storage or even a SATA SSD for your professional workflow, this test shows why it may be time for an upgrade.

Synthetics – ATTO

ATTO is a simple and free application that SSD vendors commonly use to assign sequential performance specifications to their products. It also gives us insight into how the device handles different file sizes.

In ATTO, we tested Sabrent’s Rocket NVMe 4.0 at a QD of 1, representing most day to day file access at various block sizes. PCIe 3.0 SSDs tend to max out at about 3GBps in read/write, but with massive bandwidth available to it over the PCIe 4.0 bus, the Sabrent can hit higher highs. Reaching just under 5/4 GBps read/write, Sabrent’s Rocket NVMe 4.0 is capable of delivering over 15-18x the performance of the HDD. 

Synthetic Testing – iometer

Iometer is an advanced and highly configurable storage benchmarking tool that vendors often use to measure the performance of their devices.

Under sequential reads and writes, the Sabrent Rocket NVMe 4.0 maxes out at about 5.0/4.3 GBps, and its peak random performance tops the competition at just about 600,000/550,000 IOPS read/write. At a QD of 1, Intel’s Optane 905P is in a league of its own when it comes to random performance, and Adata’s XPG SX8200 Pro and Samsung’s 970 Pro are just a hair more responsive, but Sabrent’s Rocket NVMe 4.0 is still very competitive.

Sustained Write Performance, Cache Recovery, & Temperature

Official write specifications are only part of the performance picture. Most SSD makers implement a write cache, which is a fast area of (usually) pseudo-SLC programmed flash that absorbs incoming data. Sustained write speeds can suffer tremendously once the workload spills outside of the cache and into the “native” TLC or QLC flash. We use iometer to hammer the SSD with sequential writes for 15 minutes to measure both the size of the write cache and performance after the cache is saturated. We also monitor cache recovery via multiple idle rounds. 
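A stripped-down version of that sustained-write test can be scripted if you want to eyeball your own drive’s cache behavior. The sketch below writes fixed-size chunks and logs per-chunk throughput, so the drop out of the pSLC cache shows up as a sudden dip; note that OS-level caching and filesystem overhead make this only a rough stand-in for an iometer run.

```python
import os
import time

# Write fixed-size chunks and log per-chunk throughput; a sharp drop marks
# the point where the drive's pSLC write cache is exhausted. OS caching and
# filesystem overhead make this a rough stand-in for an iometer-style test.
CHUNK_MB = 256
TOTAL_GB = 64                                  # adjust to exceed the cache size
chunk = os.urandom(CHUNK_MB * 1024 * 1024)     # incompressible data

with open("cache_test.bin", "wb") as f:
    for i in range(TOTAL_GB * 1024 // CHUNK_MB):
        start = time.perf_counter()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                   # push the data out to the drive
        mbps = CHUNK_MB / (time.perf_counter() - start)
        print(f"{(i + 1) * CHUNK_MB / 1024:6.2f} GB written: {mbps:7.0f} MBps")

os.remove("cache_test.bin")
```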

When possible, we also log the temperature of the drive via the S.M.A.R.T. data to see when (or if) thermal throttling kicks in and how it impacts performance. Bear in mind that results will vary based on the workload and ambient air temperature.

Like other Phison E16-powered NVMe SSDs, the Rocket NVMe 4.0 features a write cache that absorbs inbound data at very high speed. But once it fills, performance temporarily degrades. Sabrent’s Rocket NVMe 4.0 wrote a bit less data than the Patriot Viper VP4100 we reviewed previously, reaching 669GB of data written before its write performance tanked to about 540 MBps. Once you let it idle a bit, the cache recovers at a rate of about 16GB per 30 seconds.

Temperature-wise, even with the copper label, the 2TB model gets a bit hot under sustained writing. Without enough airflow or a motherboard heatsink, it can climb into the 80-plus degrees Celsius range and will throttle. But under most day-to-day use, temps will remain within the rated operating range.

Power Consumption

We use the Quarch HD Programmable Power Module to gain a deeper understanding of power characteristics. Idle power consumption is a very important aspect to consider, especially if you’re looking for a new drive for your laptop. Some SSDs can consume watts of power at idle while better-suited ones sip just milliwatts. Average workload power consumption and max consumption are two other aspects of power consumption, but performance-per-watt is more important. A drive might consume more power during any given workload, but accomplishing a task faster allows the drive to drop into an idle state faster, which ultimately saves power.
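The reasoning behind weighting efficiency over raw draw is easiest to see as energy, i.e. power multiplied by time. A toy comparison (the 7W active and 66mW idle figures are roughly this drive’s; the rest are made up purely to illustrate the point):

```python
# Why average power alone misleads: a faster drive returns to idle sooner,
# so its total energy for a task can still be lower. The 7 W active and
# 0.066 W idle figures roughly match this drive; the rest are made up.
def task_energy_j(active_w, active_s, idle_w, idle_s):
    return active_w * active_s + idle_w * idle_s

window_s = 120  # judge both drives over the same two-minute window
fast_hungry = task_energy_j(7.0, 30, 0.066, window_s - 30)    # ~216 J
slow_frugal = task_energy_j(4.0, 100, 0.5, window_s - 100)    # ~410 J

print(f"Fast but hungry: {fast_hungry:.0f} J, slow but 'frugal': {slow_frugal:.0f} J")
```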

With this high-performance controller and 2TB of NAND flash to manage, our sample draws a lot of power. In testing, Sabrent’s Rocket NVMe 4.0 peaked at 7.38W but averaged a bit less than the Samsung 970s. With a similar score to the Patriot Viper VP4100, the Rocket NVMe 4.0 places fourth in our efficiency test. Overall, that makes it about 17 times more efficient than an HDD during file copying and over 90 times more efficient at idle, sipping just 66mW at its lowest idle state on our test bench.

Sabrent wasn’t a big name in SSDs until recent years, but with the company’s drives packing Phison’s latest controllers, they’ve earned high regard from enthusiasts and gamers alike. Launched alongside AMD’s Ryzen 3000 series, Sabrent’s Rocket NVMe 4.0 is the company’s fastest drive yet. After months of sales, it has soared to the top as one of the best-value Gen4 SSDs available thanks to its low cost compared to the competition.

Offering up some incredible performance with the Phison E16 powering it, the Rocket NVMe 4.0 is a rocket for sure. Capable of delivering up to 5.0/4.4 GBps read/write in sequential transfers and peaking at almost 600,000/550,000 IOPS read/write, it is one of the fastest SSDs you can buy. It’s so fast, it even outperforms Samsung’s 970 EVO Plus and 970 Pro in various real-world and application testing, while being quite efficient.

Sabrent’s drive has the looks to match its performance, too. With a black PCB and a well-designed, sleek black-and-copper label, it’s one of the most aesthetically pleasing M.2 SSDs we have seen without a heatsink on top. And since it’s so slim, the Rocket NVMe 4.0 can easily fit underneath your motherboard’s built-in heatsink, if equipped. Given the amount of power the 2TB model can draw, we recommend doing so to keep temps tamed if you’re going to use the drive for professional workflows.

With class-leading endurance ratings, Sabrent’s Rocket NVMe 4.0 isn’t going to wear out on you any time soon, either. Whether you’re constantly moving around large video files, running various virtual machines or benchmarking your hardware to death, the Rocket NVMe 4.0 will keep on going. Our main complaint is that you must register the SSD with Sabrent to receive the longer 5-year warranty, but that’s not exactly a huge hassle. Otherwise, Sabrent’s Rocket NVMe 4.0 is well worth your consideration if you’re on the hunt for a high-performance PCIe Gen4 SSD for a new build.

Gigabyte TRX40 Aorus Xtreme Review: Battle For Threadripper Supremacy

AMD’s Ryzen Threadripper processors don’t come cheap, so a race for the top in motherboards to support the platform was expected before the latest CPU series was even confirmed. Asus got to us first with its $850 ROG Zenith II Extreme, and it was only a matter of time before Gigabyte sent its own $850 entry. 

The biggest advantage we can find in the TRX40 Aorus Xtreme’s specs is Intel’s 10GBASE-T network controller, which offers users a way to get 10GbE over copper (Cat 6A) cabling in addition to 5GbE, 2.5GbE, and standard Gigabit Ethernet compatibility over the same cable. That kind of interoperability explains why Intel calls this its Converged Network Adapter. Moreover, the board provides two of these 10GbE connections, compared to the 10Gb/1Gb Ethernet pairing of the competing Asus product.

Asus counters that single expensive feature with one of its own, a Gen 2×2 USB port that pairs two 10Gb interfaces over a single Type-C USB port. Yet while each of Gigabyte’s USB ports makes do with a single 10Gb connection, the fact remains that it has eight ports at this speed, compared to Asus’ five.

The rest of the I/O panel is filled with a Q-Flash Plus button for firmware updates, a Clear CMOS button to erase custom firmware settings, a pair of antenna connectors for the semi-integrated Intel AX200 (2.4Gbps) Wi-Fi module, a vent for the integrated voltage regulator fan, five analog audio jacks, and an optical S/PDIF output.

Gigabyte optimized its TRX40 Aorus Xtreme for four graphics cards at double-slot spacing, differing from the Asus board which has a single space between the second and third slots. While that might seem like a win for Gigabyte, moving the first slot to the case’s top position meant sliding up the DIMMs as well, which in turn limits the amount of space available for voltage regulator cooling. Heat is redirected to a second sink between DIMM slots and the I/O panel, where the cooling fan resides, via a thick heatpipe. 

The mounting-depth concern we had with the competing board remains with the TRX40 Aorus Xtreme: its 10.8-inch width exceeds the available space of many high-quality ATX cases. And though the board is rated as EATX, 10.8 inches is still well short of that specification’s limit, while many ATX cases offer only a bit more than 10.6 inches of mounting space. So be extra sure about your case clearances before buying.

One of the more annoying things about building with the TRX40 Aorus Xtreme is the number and variety of screws required to remove its M.2 covers. Another is that those covers are inseparable from the fan cover, making it less likely that builders will want to use heatsink-integrated M.2 drives. But removing those covers reveals one nice feature: four PCIe 4.0 x4 M.2 slots.

The forward-bottom M.2 slot steals I/O pathways from SATA, reducing the number of ports from ten to six, but the competing Asus board has similar sharing that reduces its ports from eight to four. And while the Asus board supports up to five M.2 drives thanks to its M.2 riser card, one of its onboard slots steals another four lanes from its eight-lane lower PCIe card slot. Gigabyte’s TRX40 Aorus Xtreme supports one fewer drive, but since its PCIe slots are always x16-x8-x16-x8 regardless of the number of drives installed…pick your resource-sharing poison.

Front-panel audio, dual ARGB, dual RGB, Thunderbolt add-in card and Trusted Platform Module headers are found beneath all those slots, along with switches to select between two firmware ROMs and to enable or disable automatic fallback to the backup BIOS following a crash.

Many of the TRX40 Aorus Xtreme’s front-panel headers are hidden along the forward edge, and some require custom (included) breakout cables. From left to right are supplemental power for the PCIe slots, two USB 3.0 headers, ten SATA ports (two from an ASMedia ASM1062 controller), a custom four-port USB 2.0 header that mates with an included adapter cable, 24-pin power, a noise-sensor header for an included internal microphone used in SPL-based fan tuning, a custom front-panel header for an included button/LED/PC speaker breakout cable, and five of the board’s seven available fan connectors.

Two more fan headers are located between the onboard power/reset buttons and the twin CPU power headers, while a USB 3.2 Gen2 front-panel header is located next to the two-digit diagnostics code display.

An Infineon XDPE132G5C controller drives sixteen core phases along the TRX40 Aorus Xtreme’s top edge, with another three phases on a separate controller behind the rear DIMM bank for the CPU SoC rail. All phases use 70A Infineon TDA21472 power stages.

A light diffuser for front-edge RGB LEDs is sandwiched between the back of the TRX40 Aorus Xtreme’s circuit board and a steel back brace and secured via three screws. Four additional screws and standoffs support the rest of the brace farther back, and two more screws secure the factory-installed I/O shield to the brace. The brace also contacts the rear of the voltage regulator through a thermal pad, thereby adding heat dissipation to its functionality. 

Included in the retail box are the TRX40 Aorus Xtreme motherboard, a manual, two quick installation guides, custom breakout cables for the USB 2.0 and front-panel button/LED/PC speaker connections, a USB thumb drive with drivers and applications, the AORUS Gen4 four-M.2 to PCIe x16 add-in-card adapter previously described in our X299X Designare 10G review, two Wi-Fi antennae, and six SATA cables with braided sleeves. Also in the box are an internal microphone lead for SPL-based fan tuning, two RGB extension and two ARGB adapter cables, two Velcro cable ties, two thermistor cables, and a G-Connector front-panel button/LED/PC speaker bundling block whose function is already built into the custom front-panel breakout adapter.

Gigabyte’s App Center is a central launch point for most of the firm’s included software, with added functionality including a few shortcuts to Windows settings and a software downloader with updater. When using the updater, be careful to ensure that any unwanted freeware is manually deselected.

Gigabyte @BIOS allows users to update or save firmware from within Windows, and even includes a utility to change the board’s boot-up splash screen. 

Gigabyte EasyTune worked well for changing clocks and voltage levels, but its automatic overclocking program only pushed our 3970X to 4.0GHz at 1.38V. We can do better manually. 

Clicking the little heart monitor icon in the lower-right corner of EasyTune brings up a Hardware Monitor menu on the right edge of the screen. We split that and put the halves side-by-side so it would fit into this image box. 

Hardware Monitor is part of Gigabyte’s System Information Viewer, so clicking its return icon brings us here rather than back to EasyTune. After running a fan-optimization test upon first use, users can choose a fan profile, configure their own, set system alarm levels, and log many of the stats displayed in Hardware Monitor.

Gigabyte RGB Fusion lighting control software worked with its board, our memory, and our graphics card, for the most part. Many of the settings operated synchronously across all components, but the program could not apply the wave (rainbow wave) mode to our DRAM unless we set those items asynchronously. It also reverted to the former synchronous setting (or turned the memory LEDs off) when we switched from the memory menu to another menu with the memory set to "default" (rainbow wave). That still leaves a bunch of non-wave lighting patterns to choose from.

Firmware

TRX40 Aorus Xtreme firmware defaults to its Easy Mode interface, but it remembers the interface from which the user last saved, so if you exit from Advanced Mode, you’ll return to Advanced Mode.

The Tweaker menu in Advanced Mode let us set a stable 4.20 GHz CPU clock at 1.35V under load, but the way we got there was a little convoluted: After first setting “High” VCore Loadline Calibration within the CPU/VRM Settings of Advanced Voltage Settings, we gradually dropped the CPU VCore setting from 1.35V to 1.325V until it no longer overshot our desired voltage.

The reason we didn’t try a lower VCore Loadline Calibration setting is that every time we adjusted the Loadline or CPU multiplier, the board would reset CPU voltage to stock. And, it wouldn’t show that change in settings, so we were left guessing, reconfiguring it, and rechecking it at the next boot. 

The TRX40 Aorus Xtreme has a complete set of primary and secondary memory timings to play with, along with advanced controls and even a menu that displays SPD and XMP configurations. We reached DDR4-4200 at 1.352-1.354V on our voltmeter, though getting there required us to set 1.34V within the Tweaker menu and it was displayed as 1.356V by firmware. 

The generically named Settings menu includes a very limited PC Health page plus Smart Fan 5 settings. That said, the Smart Fan 5 popup can be accessed from any menu simply by pressing the F6 function key. Six of the board’s fan headers can be controlled independently here, using the tuner’s choice of PWM or voltage-based RPM control.

The System Info tab includes a Plug In Devices menu that shows the location of each detected device, and a Q-Flash menu for updating firmware. Checking the status of devices here can help determine whether a device that dropped out of Windows did so due to a Windows fault or a hardware fault.

Gigabyte’s TRX40 Aorus Xtreme takes on the top contenders from previous reviews, which include the $850 ROG Zenith II Extreme from Asus, MSI’s $700 Creator TRX40, and ASRock’s TRX40 Taichi. Gigabyte’s GeForce RTX 2070 Gaming OC 8G, Toshiba’s OCZ RD400 and G.Skill’s Trident-Z DDR4-3600 feed AMD’s Ryzen Threadripper 3970X. Alphacool’s Eisbecher D5 pump/reservoir and NexXxoS UT60 X-flow radiator cool the CPU through Swiftech’s SKF TR4 Heirloom.

The TRX40 Aorus Xtreme achieved a solid overclock for both our CPU and DRAM, but still came up a little shy of the ROG Zenith II Extreme. Memory data rate showed the largest difference, but only by a mere 66 MHz.

Gigabyte didn’t need that extra 66 MHz of data rate to beat the ROG Zenith II Extreme in memory bandwidth, and we might even be willing to recommend the board to overclockers had it not reset our CPU voltage every time we changed the CPU multiplier.

While the TRX40 Aorus Xtreme had the best overclocked bandwidth in SiSoftware Sandra, it also had the lowest default overclock at standard XMP settings. Fortunately, its latency was still the best. 

Though the bars on the chart move around a bit, the TRX40 Aorus Xtreme’s strongest competitor in 3DMark and PCMark appears to be the $700 Creator TRX40. 

The TRX40 Aorus Xtreme sneaks past the ROG Zenith II Extreme in Ashes, but Asus returns the favor in F1 2017. We’re still not close to determining a true winner. 

Asus’ lower completion times in several workloads spell trouble for Gigabyte, but MSI tops both by a minuscule margin.

Power, Heat and Efficiency

Even though it’s feature-heavy, the TRX40 Aorus Xtreme matches the TRX40 Taichi for lowest power consumption. Depending on average performance, that could prove favorable in our efficiency rating. 

The TRX40 Aorus Xtreme’s voltage regulator shows higher temperatures compared to rivals, but a look at its fan ramp shows that the fan doesn’t even kick on until it reaches 85 degrees C. Our test was conducted in a 21-degree room, so the fan wasn’t on. 

To assure readers that our readings were close to accurate, we attached four thermocouples to the back of the board, beneath its voltage regulator. Unfortunately, this means our thermocouples were being cooled by the backplate, since the backplate cools the voltage regulator through a thermal pad. Gigabyte’s onboard thermistor placement appears far better than any place we could find to stick our own sensors.

Given that the best location for our thermocouples was compromised by integrated cooling, we re-taped our sensors in a nearby spot with no backplate contact, behind the VR chokes. Our thermometer’s readings increased by a few degrees but remained lower than the integrated thermistor’s, which is likely due to the greater distance from the heat source.

While the TRX40 Aorus Xtreme wasn’t the top performer, it did beat its closest rival in our regular benchmarks and took second place overall. Moreover, the performance difference between all four boards was only 0.6%, and the TRX40 Aorus Xtreme’s relatively low power consumption gave it the top efficiency score. 

Conclusion

Value seekers love a chart that shows the board with the fewest features marching victorious over its rivals, but we perceive these as mere entertainment when onboard hardware is so disparate. 

The real rivalry here is between the TRX40 Aorus Xtreme and ROG Zenith II Extreme, and these boards are fairly close to each other not only in price, but also in performance and feature value. The difficulty we faced in getting CPU overclocking adjustments to "stick" causes us to lean slightly toward Asus, though we could understand if someone else chose the TRX40 Aorus Xtreme for its superior network controller and PCIe slot arrangement.

We still have another $850 board, the Zenith II Extreme Alpha, to test. Will that model reveal itself as the true leader?

ASRock RX 5500 XT Phantom Gaming D Review: Inexpensive, Well-performing

AMD’s RX 5500 XT release in December 2019 targeted the entry-level 1080p gaming segment and was, overall, received well by the public. In particular, the 8GB variants are enticing, as they don’t take the performance hit of the 4GB cards in certain titles, though for now the card hasn’t managed to break into our best graphics cards guide. That’s partly because budget cards are pretty far down the GPU hierarchy, with higher pricing than many competing cards. The ASRock RX 5500 XT Phantom Gaming D we’re reviewing comes with the full 8GB of VRAM, a factory overclock, an attractive price and a dual-fan cooling solution designed to keep the card cool and quiet while gaming.

Performance of the Phantom Gaming D was just where we expected it, competing with the other RX 5500 XT 8GB variants tested. It ends up faster than the GeForce GTX 1650 Super and slower than the GeForce GTX 1660. Compared to the other 8GB RX 5500 XT cards we’ve tested, the ASRock performed the same, with less than 1% difference between them. The card averages almost 72 frames per second (fps) at 1080p using ultra settings across all games. Only Metro: Exodus, The Division 2 and Borderlands 3 fell below the 60 fps threshold (37.7, 57.9 and 42.9 fps, respectively). When lowering the settings to medium, the average increased to 102 fps, and all titles stayed above the 60 fps threshold and ran smoothly.

At the time of writing, the ASRock RX 5500 XT Phantom Gaming D is $199.99 on Newegg, the least expensive 8GB card in this roundup. It also comes with the Resident Evil 3 remaster, Ghost Recon: Breakpoint and three months of Xbox Game Pass for PC. We pit the ASRock against Gigabyte’s RX 5500 XT Gaming 8G at $219.99, the Asus ROG Strix RX 5500 XT O8G Gaming for $229.99, and the 4GB Sapphire Pulse RX 5500 XT priced at $179.99. Between the 8GB cards, there is a $30 price difference while the 4GB model used for testing is $20 cheaper.

On the Nvidia side of things, the Zotac GTX 1650 Super has the lowest price at $159.99 while the Zotac GTX 1660 is $239.99, the most expensive card in this article. Worth noting is the GTX 1660 Super can be found for $229.99, and other GTX 1660 cards can be found starting at $209.99. We’ve also previously compared the Radeon RX 5500 XT vs. GeForce GTX 1660.

We’ll detail how the ASRock card performed against its peers and competition, how well it performed thermally, and other important details so you can make a more informed buying decision.

Features

All Radeon RX 5500 XTs use the Navi 14 GPU and first-generation RDNA architecture. TSMC produced the 7nm die, which packs 6.4 billion transistors into a 158mm² area. This includes 1,408 shaders, 32 ROPs, and 88 TMUs across 22 Compute Units (CUs). Clock speeds on the ASRock Phantom Gaming D are 1,737 MHz Game clock and 1,845 MHz boost clock—a 57 MHz increase over the reference clock speed (1,680 MHz) and the same as the Asus ROG Strix used here.

The 8GB of GDDR6 memory sits on a 128-bit bus and runs at 1,750 MHz (14 Gbps effective)—the standard speed for the Navi 14 GPU. This configuration yields 224 GB/s of bandwidth, and the RX 5500 XT comes in 4GB and 8GB variants. Unless you plan to game at 1080p using reduced settings, you’ll want the 8GB card over the 4GB version. With VRAM needs increasing over time, 4GB is now considered the minimum for most users, while 6-8GB is recommended for those who want to use ultra settings.
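
If you want to sanity-check that bandwidth figure, the arithmetic is simple enough to script. The snippet below is just an illustrative sketch (the helper name is ours, not something from AMD or ASRock): peak bandwidth is the bus width in bits multiplied by the per-pin data rate, divided by eight to convert bits to bytes.

```python
# Illustrative GDDR6 bandwidth check (hypothetical helper, not from any vendor tool).
def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# RX 5500 XT: 128-bit bus running at 14 Gbps effective
print(gddr6_bandwidth_gb_s(128, 14.0))  # 224.0 GB/s, matching the figure above
```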

AMD lists the RX 5500 XT’s Total Board Power (TBP) at 130W and recommends a 450W power supply. ASRock, like most board partners, does not list a TBP for the Phantom Gaming D, though it raises AMD’s 450W power supply recommendation to 500W. Actual power use will vary between partner cards due to higher clock speeds and where the power limit is set. Feeding the card is a single 8-pin PCIe connector, capable of delivering more power than this card will need, even with overclocking.

Additional specifications for each of the compared cards are listed in the chart below.

Design

The ASRock RX 5500 XT Phantom Gaming D is a two-slot video card measuring 9.5 x 5 x 1.6 inches (241 x 127 x 42mm). Though the heatsink extends past the PCB lengthwise, the card’s overall length should allow it to fit in most chassis, including some small form factor (SFF) builds. Be sure to verify the space inside your case before buying this or any other video card.

Covering the heatsink and surrounding the two 85mm fans (which have a 0dB silent cooling feature) is a plastic shroud that fits the ASRock Phantom Gaming theme, with black and red accents along with a faux brushed-aluminum finish. The rear of the card is protected by a backplate that also matches the card’s theme and doubles as a passive heatsink via thermal pads.

The Phantom Gaming D adds a bit of RGB flair as well, with the Phantom Gaming name and symbol illuminated on the top of the card. For its size, the color is bright and saturated, though being so small it won’t take over the inside of your case.

In order to keep the card cool, ASRock uses a dual-fan setup along with a good-sized heatsink. The GPU die makes contact with the heatsink through a copper plate, which then sends the heat into the fin array via three large copper heatpipes. The heatsink cools all critical parts of the video card, including the VRMs and memory, all of which connect to the fin array through an aluminum plate.

The ASRock RX 5500 XT Phantom Gaming D routes power through a 6+1-phase VRM, with the GPU and VRAM managed by two OnSemi NCP81022 (4-phase) controllers. The GDDR6 chips on this card are made by Samsung and specified to run at 1,750 MHz (14 Gbps). This configuration will deliver plenty of clean power for both stock and overclocked operation.

Outputs on the Phantom Gaming D are standard fare consisting of three DisplayPorts (1.4 with DSC 1.2a) and a single HDMI (2.0b) output. This should be plenty for most users. 

How We Tested the ASRock RX 5500 XT Phantom Gaming D 

Our current graphics card test system consists of Intel’s Core i9-9900K, an 8-core/16-thread CPU that routinely ranks as the fastest overall gaming CPU. The MSI MEG Z390 Ace motherboard is paired with 2x16GB Corsair Vengeance Pro RGB DDR4-3200 CL16 memory (CMK32GX4M2B3200C16). Keeping the CPU cool is a Corsair H150i Pro RGB AIO, along with a 120mm Sharkoon fan for general airflow across the test system. Storing our OS and gaming suite is a single 2TB Kingston KC2000 NVMe PCIe 3.0 x4 drive.

The motherboard is running BIOS version 7B12v16. Optimized defaults were used to set up the system. We then enabled the memory’s XMP profile to get the memory running at the rated 3200 MHz CL16 specification. No other BIOS changes or performance enhancements were enabled. The latest version of Windows 10 (1909) is used and is fully updated as of February 2020.

Our GPU hierarchy provides a complete overview of graphics cards and how the various models stack up against each other. For these individual third-party card reviews, we include GPUs that compete with and are close in performance to the card being reviewed. On the AMD side, we have the Sapphire Pulse RX 5500 XT, Asus ROG Strix RX 5500 XT O8G Gaming and the Gigabyte RX 5500 XT Gaming OC. Nvidia cards include the Zotac GTX 1650 Super and the Zotac GTX 1660 Amp. 

Our list of test games is currently Battlefield V, Borderlands 3, The Division 2, Far Cry 5, Final Fantasy XIV: Shadowbringers, Forza Horizon 4, Gears of War 5, Metro Exodus, Shadow of the Tomb Raider and Strange Brigade. These titles represent a broad spectrum of genres and APIs, which gives us a good idea of the performance differences between the cards. We’re using driver build 441.20 for the Nvidia cards and Adrenalin 2020 Edition 19.12.2 for AMD cards, although the 5600 XT was tested using 20.1.2 beta drivers.

We capture our frames per second (fps) and frame time information by running OCAT during our benchmarks. For clock and fan speed, temperature and power, we use GPU-Z’s logging capabilities. We’ll be resuming our use of the Powenetics-based system from previous reviews in the near future.
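
For anyone who wants to crunch their own captures, here is a minimal sketch of how the frame-time data in an OCAT CSV can be reduced to an average fps and a 99th-percentile frame time. The file path is hypothetical and the "MsBetweenPresents" column name follows OCAT/PresentMon convention, so verify it against your own log before relying on it.

```python
# Minimal sketch: summarize an OCAT capture into average fps and 99th-percentile frame time.
# Assumes the OCAT/PresentMon-style "MsBetweenPresents" column; adjust to match your log.
import csv
import statistics

def summarize_capture(path: str) -> tuple[float, float]:
    with open(path, newline="") as f:
        frame_times_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)        # mean frame time -> average fps
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th-percentile frame time
    return avg_fps, p99_ms

avg, p99 = summarize_capture("ocat_borderlands3_run1.csv")    # hypothetical capture file
print(f"{avg:.1f} fps average, {p99:.1f} ms 99th-percentile frame time")
```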

Beginning with the 1080p ultra results, the ASRock RX 5500 XT Phantom Gaming D averaged 71.9 fps across all titles. At these settings, all but three titles—Metro: Exodus (37.7 fps), The Division 2 (57.9 fps) and Borderlands 3 (42.9 fps)—are able to average at least 60 fps and provide a smooth gaming experience. All of AMD’s RX 5500 XT cards are capable 1080p ultra video cards, though some games will need to reduce settings to reach 60 fps.

Looking at the other RX 5500 XT cards in this review, the ASRock card is just as fast as the other 8GB variants—all averaging over 71 fps with the Asus O8G Gaming averaging 71.7 fps and the Gigabyte 71.3 fps. The Sapphire Pulse RX 5500 XT 4GB is well behind at 63 fps (or 13% slower) because some titles showed a severe performance drop due to the 4GB memory and PCIe 3.0 x8 configuration.

If we include the two Nvidia-based GPUs, our ASRock review card is almost 4% faster than the much less expensive Zotac GTX 1650 Super (69.3 fps average), and over 6% slower than the slightly more expensive Zotac GTX 1660 Amp (76.6 fps average). Since these Turing-based video cards do not support ray tracing or DLSS, the decision between some of these cards will come down to price, performance (both thermal and fps) and card features.
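
Those relative figures come straight from the averages quoted above; a quick sketch of the arithmetic, for anyone who wants to check it:

```python
# Quick check of the relative-performance figures quoted above (1080p ultra averages).
asrock_5500xt = 71.9   # fps
gtx_1650_super = 69.3  # fps
gtx_1660_amp = 76.6    # fps

print(f"vs GTX 1650 Super: {(asrock_5500xt / gtx_1650_super - 1) * 100:+.1f}%")  # roughly +3.8%
print(f"vs GTX 1660 Amp:   {(asrock_5500xt / gtx_1660_amp - 1) * 100:+.1f}%")    # roughly -6.1%
```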

Staying at 1080p resolution but lowering the image quality settings to medium allowed all the games to reach over 60 fps. The ASRock Phantom Gaming D averaged 102 fps along with the Asus. The Gigabyte Gaming OC averaged 101 fps—all are within 1% of each other, which is basically the margin of error for our testing and wouldn’t be noticeable in gaming.

The Sapphire Pulse didn’t choke on its 4GB of VRAM at medium settings and ended up only 4% behind. Most games were over 80 fps, with a few (The Division 2, Strange Brigade, Final Fantasy XIV, Forza Horizon 4 and Battlefield V) averaging well over 100 fps. Dropping down to medium settings shows a significant performance increase over ultra.

Performance differences between the ASRock Phantom Gaming D and the Nvidia cards are similar to the 1080p ultra results, with the GTX 1650 Super about 5% slower and the GTX 1660 almost 4% faster. Medium settings are more CPU-bound, so the performance gaps tend to shrink here compared to higher resolutions and image-quality settings.

We use GPU-Z logging to measure each card’s power consumption with the Metro Exodus benchmark running at 2560 x 1440 using the default ultra settings. The card is warmed up prior to testing and logging is started after settling to an idle temperature (after about 10 minutes). The benchmark is looped a total of five times, which yields around 10 minutes of testing. In the charts, you will see a few blips in power use that are a result of the benchmark ending one loop and starting the next.
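
As a rough illustration of how those GPU-Z logs turn into the averages reported below, a short sketch follows. GPU-Z’s sensor log is a comma-separated text file, but the exact power column name varies by GPU, so the file name and header string here are placeholders to adjust for your own card.

```python
# Rough sketch: average a single sensor column from a GPU-Z log file.
# The column name varies by GPU (e.g. a chip-power column on Navi cards), so pass your own.
import csv

def average_column(log_path: str, column: str) -> float:
    values = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            try:
                values.append(float(row[column]))
            except (KeyError, ValueError):
                continue  # skip rows with missing or malformed sensor samples
    return sum(values) / len(values)

# Hypothetical file and column names, for illustration only.
print(f"{average_column('gpuz_metro_loop.txt', 'GPU Chip Power Draw [W]'):.0f} W average")
```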

We also use FurMark to capture worst-case power readings. Although both Nvidia and AMD consider the application a “power virus,” or a program that deliberately taxes the components beyond normal limits, the data we can gather from it offers useful information about a card’s capabilities outside of typical gaming loads. For example, certain GPU compute workloads including cryptocurrency mining have power use that can track close to FurMark, sometimes even exceeding it.

Starting with the gaming tests, the ASRock RX 5500 XT Phantom Gaming D averaged 114W, the most of all the RX 5500 XT cards tested so far. The Asus was just behind it at 104W, then the Sapphire Pulse at 102W, followed by the Gigabyte sipping power at 89W. As an end user, you will be hard-pressed to see these differences on your power bill. We also need to be cognizant that our current recording method, GPU-Z, only records chip power and not Total Board Power (TBP) for AMD GPUs. This means actual power use is going to be a bit higher on these cards.

The Zotac GTX 1650 Super averaged 97W—a few watts lower than most of the RX 5500 XTs we’ve tested. The faster Zotac GTX 1660 Amp (not pictured in the chart) used even less power at 89W. This shows that Nvidia’s 12nm Turing architecture is still slightly more power-efficient than 7nm Navi, despite the difference in lithography.
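
To put that efficiency claim in rough numbers, dividing the average frame rates by the average gaming power gives a crude performance-per-watt figure. Keep the caveat above in mind: GPU-Z reports chip power rather than total board power for the AMD cards, so this sketch flatters Navi slightly and is an illustration, not a rigorous efficiency test.

```python
# Crude performance-per-watt comparison from the averages above (gaming test, 1080p ultra).
# The AMD wattage is chip power only, per the GPU-Z caveat noted earlier.
cards = {
    "ASRock RX 5500 XT":    (71.9, 114),  # (avg fps, avg gaming watts)
    "Zotac GTX 1650 Super": (69.3, 97),
    "Zotac GTX 1660 Amp":   (76.6, 89),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.2f} fps per watt")
```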

Power consumption using FurMark is much more consistent across the test. In this case, the ASRock averaged 129W, with the Asus and Sapphire cards both using 133W. The Gigabyte again comes in lowest, reaching 122W. The GTX 1650 Super barely budged from the game tests, averaging 99W, which is less power than all of the RX 5500 XT cards we’ve tested.

Temperatures, Fan Speeds and Clock Rates 

To see how each video card behaves in terms of temperatures and fan speeds, we use GPU-Z logging at one-second intervals, just as with the power testing. These items are captured by looping the Metro Exodus benchmark five times, running at 2560×1440 and ultra settings.

We also use FurMark to capture the data below, which offers a more consistent load and uses slightly more power, even though clock speeds and voltages are limited. These data sets give insight into worst-case situations along with a non-gaming workload.

Gaming

Temperatures for the ASRock Phantom Gaming D averaged almost 62 degrees Celsius during gaming testing. This result places it in the middle with the Gigabyte Gaming OC. The Sapphire Pulse ran the warmest at 69 degrees Celsius, at least partly because its fan speeds are lower, while the much larger Asus ROG Strix RX 5500 XT O8G Gaming ran the coolest at 54 degrees Celsius. Though the Phantom Gaming D didn’t have the best cooling solution, it worked quietly and kept the video card running well within specification. 

Fan speeds during the Metro: Exodus test show all cards except the Sapphire Pulse varying their fan speeds significantly. The ASRock ranged from around 1,600 RPM to a peak of 2,000 RPM throughout the test. The higher fan speeds were more noticeable than those of the slower-spinning Asus and Gigabyte cards, but none were particularly loud or off-putting. During more typical gaming loads (where there isn’t a scene change every 100 seconds), users shouldn’t see this fan behavior.

Clock speeds on the ASRock Phantom Gaming D averaged 1,818 MHz during the last phase of the gaming test. This result is over 20 MHz faster than the Sapphire Pulse (1,794 MHz), 10 MHz faster than the Gigabyte and 2 MHz faster than the Asus. That makes sense considering the 8GB cards’ out-of-the-box clock speeds are similar. Also noteworthy is how much the 4GB of memory holds back the Sapphire card’s results, given that its core clock speeds are similar to all of the other tested cards.

FurMark

Temperatures in FurMark ran a couple of degrees warmer than in game testing across all tested cards. The ASRock Phantom Gaming D and Gigabyte Gaming OC both peaked at 67 degrees Celsius, with the Asus again coming in coolest at 60 degrees Celsius. The slower Sapphire card peaked at 74 degrees Celsius with a similarly sized cooling solution to the Phantom Gaming D.

Fan speeds during FurMark testing stabilized across all cards, with the ASRock again peaking around 2,000 RPM. Unlike the Asus O8G Gaming, the ASRock Phantom Gaming D maintained these speeds throughout the test. While the ASRock video card doesn’t have the best cooling solution, it kept the card well within specification and did so relatively quietly.

Clock rates during FurMark testing averaged 1,661 MHz, by far the lowest value of the three RX 5500 XT 8GB cards tested, and over 150 MHz lower than the Phantom Gaming D’s result in game testing.

Along with AMD’s software suite that’s included with the driver package, ASRock has its own monitoring and tweaking software, named ASRock Tweak. This lightweight application is able to overclock the core and memory speed, though it’s manual only—there’s no automatic scanner.

The software displays current core and memory speeds, GPU and memory use, along with temperatures and fan speeds. Unlike similar applications from other card partners, ASRock Tweak doesn’t include real-time hardware monitoring in chart form.

Overall, the software works fine for its intended purpose, but it’s not as feature-rich as some of the other solutions. More granular control over AMD video cards can be found within the driver software.

The ASRock RX 5500 XT Phantom Gaming D’s testing showed it to be a competent 1080p ultra gamer across the majority of titles in our test suite. Although it doesn’t have the large cooler and three fans some of the other cards do, the Phantom Gaming D’s cooling solution kept the card running well within specification and did so rather quietly—not quite as quiet or as effective as the much larger Asus card, but effective nonetheless.

Priced at $199.99, the Phantom Gaming D is the least expensive card compared to the other 8GB AMD RX 5500 XT cards we’ve tested. The Gigabyte is priced at $219.99 and the Asus O8G $229.99. The 4GB Sapphire Pulse is listed at $179.99, though the 4GB VRAM makes it a less desirable choice. Between the 8GB cards, some titles may show one performing slightly better than the other, but it’s mostly just typical fluctuations and in the end, they all averaged out to perform the same. Where they set themselves apart is the cooling and other features. 

Opening up considerations to Nvidia GPUs, we know the 5500 XT 8GB cards are slightly faster than the less expensive GTX 1650 Super and a few percent slower than the GTX 1660. The GTX 1660 Super also makes for an intriguing buy. Priced from $229.99, it’s about 15% faster than the GTX 1660 for around the same money. If you can stretch the budget to $230, it offers a better price-to-performance ratio than any RX 5500 XT.

Right now, if you want the most well-rounded RX 5500 XT 8GB card, it’s the ASRock Phantom Gaming D. While it doesn’t cool as well as the larger Asus card, it cools as well as the Gigabyte Gaming OC version while being smaller, and it costs less. Its two fans spin faster and create more noise than the Gigabyte and Asus cards, but the noise wasn’t intrusive. Beyond that, all three have some form of RGB lighting as well as backplates. The differences between their VRMs won’t matter to the typical overclocker on ambient cooling; they’re all robust solutions.

Overall, the ASRock RX 5500 XT Phantom Gaming D is a good performing graphics card for both 1080p ultra and 1080p medium settings. As the least expensive 8GB 5500 XT, this card will give you the same performance as more expensive options and does so with a much smaller footprint. If you’re looking for a good 1080p ultra/medium video card around the $200 price point, the Phantom Gaming D is a good option.