Is AMD’s Radeon RX 5500 XT Hamstrung by VRAM and PCIe Bandwidth?

With the release of the budget RX 5500 XT, AMD started to fill out the bottom two-thirds of its product stack, bringing much-needed competition into that segment. The card arrived in 4GB and 8GB variants ($169 and $199, respectively), and in our RX 5500 XT review, the 4GB variant performed considerably slower at Ultra settings in some titles, namely Forza Horizon 4, Battlefield V, Far Cry 5 and Shadow of the Tomb Raider. The initial takeaway was that 4GB simply isn’t enough capacity for those titles, which is correct. However, there’s more to it.

German website pcgameshardware.de recently tested both RX 5500 XT versions using PCIe 3.0 and PCIe 4.0, and the results were a bit surprising. When the 4GB card’s VRAM buffer was full, running on a PCIe 4.0 slot improved performance by softening the penalty of spilling data over to system memory. The 8GB card also showed minor gains moving from PCIe 3.0 to 4.0, but its larger capacity is what makes that card shine: with more memory on board, it didn’t have to evict data from the faster VRAM as often.

So why does this happen? Once the VRAM fills up, the extra data is swapped out to system RAM across the PCIe bus. The faster the bus, the sooner that swapped data gets where it needs to go without delaying the next frame. The 5500 XT is wired as a PCIe 4.0 x8 card (the physical tab is x16), which yields the same theoretical bandwidth as PCIe 3.0 x16. According to the article, AMD said that is sufficient for a GPU of this performance class. For all intents and purposes, this is correct. However, for games that eclipse the installed VRAM, we can see it fall short.

What pcgameshardware.de showed was that during reads and writes across the bus, the transfer rate was effectively cut in half: whereas a full-bandwidth PCIe 3.0 x16 link (or, equivalently, PCIe 4.0 x8) reaches around 12.5 GBps in practice, the card running at PCIe 3.0 x8 only manages around 6.5-6.7 GBps. On-card memory copy speeds are unaffected, since those transfers happen within the GPU’s own VRAM and never cross the PCIe bus.
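Those numbers line up with what the link widths predict. Theoretical PCIe bandwidth is just the per-lane signaling rate times the 128b/130b encoding efficiency times the lane count; a quick sketch (real-world throughput lands lower, around 12.5 GBps for a 15.75 GBps link, due to protocol overhead):

```python
# Theoretical one-direction PCIe bandwidth in GB/s. PCIe 3.0 signals at
# 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding;
# 1 GT/s carries roughly 1 Gb/s per lane before encoding overhead.

GT_PER_S = {"3.0": 8.0, "4.0": 16.0}
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def pcie_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Theoretical bandwidth in GB/s for one direction of the link."""
    return GT_PER_S[gen] * ENCODING * lanes / 8  # /8 converts bits to bytes

for gen, lanes in [("3.0", 8), ("3.0", 16), ("4.0", 8)]:
    print(f"PCIe {gen} x{lanes}: {pcie_bandwidth_gbps(gen, lanes):.2f} GB/s")
```

The x8 link at PCIe 3.0 tops out at half the x16 figure, which is exactly the halving pcgameshardware.de measured, while moving to PCIe 4.0 at x8 restores the full x16-class number.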

So what does this mean for performance? The improvements varied by title (and settings). pcgameshardware.de tested Assassin’s Creed Odyssey, Battlefield V, Far Cry: New Dawn, Wolfenstein Youngblood and Shadow of the Tomb Raider, and in every test the 4GB card improved when running on a PCIe 4.0 system. Some gains were significant, others not so much; Assassin’s Creed, Far Cry and Wolfenstein Youngblood appeared to benefit the most. The 8GB variant also showed slight improvements across the board, but not nearly as much.

In the end, it’s a curious choice for AMD to wire these cards in an x8 configuration, particularly with the 4GB card, which has a greater chance of running into VRAM limits. While many titles will not run into these issues, those that do can take a severe performance hit. 

With all of this information, I’m left wondering why AMD chose to configure the card this way. Cost, perhaps? Increasing the performance of both cards could only have helped the optics, especially on the 4GB card, which is the most affected. It could even have been spun as a reason to buy into the Ryzen 3000/X570 ecosystem, a bright spot for PCIe 4.0. Instead, we’re left wondering: why?

Intel discrete GPU struggles: won’t compete against NVIDIA or AMD

Raja Koduri could be the next one on Intel’s chopping block according to new rumors on DG1

Intel has been going through all sorts of issues over the last couple of years: the sale of its troubled modem division to Apple for $1 billion, the constant world of hurt AMD is inflicting in the consumer/HEDT/server CPU markets, and now it seems its discrete GPU is in trouble.

A fresh rumor floating online says that Intel’s upcoming DG1 is in trouble, cementing some of the whispers I’ve been hearing from my own industry sources. A new post on Reddit suggests that “DG1 is not shaping up very well at all”, and that the thermals and power are “looking to be dismal as Koduri is having a hard time reigning in DG1’s power envelope. Anyone that has followed his GPU designs know that this has been his downfall forever”.

A new graphics card with a 25W TDP isn’t going to get anywhere near even the lowest-end Radeon or GeForce cards from AMD and NVIDIA, and if DG1 is in this bad a state this far from release, Koduri won’t look good at all. He was the man behind the failed launch of Radeon RX Vega during his time leading AMD’s Radeon Technologies Group, with the Reddit post continuing: “The real question now is should we expect to see Koduri still be working for Intel after Q3’20? If not, you can expect to never see DG2 make its way to a retail product”.

ADATA Reveals XPG Hunter SO-DIMMs: Up to DDR4-3000, Up to 32 GB

ADATA has introduced its new family of SO-DIMMs for laptops and small form-factor desktops. The XPG Hunter DDR4 SO-DIMMs offer data transfer rates of up to 3000 MT/s and capacities from 8 GB to 32 GB, allowing typical dual-channel (dual slot) systems to go up to 64 GB of RAM in total.

ADATA’s XPG Hunter DDR4 SO-DIMMs are based on cherry-picked memory chips as well as high-quality PCBs. Like other enthusiast-class memory modules, the XPG Hunter SO-DIMMs feature XMP 2.0 SPD profiles to make it easier to set the correct speed settings. Also, to maximize stability, the modules come equipped with heat spreaders.

The XPG Hunter SO-DIMMs are set to be available in 8 GB, 16 GB, and 32 GB sizes and will be rated for DDR4-2666 CL18 and DDR4-3000 CL17 at 1.2 V operation.
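For context on those transfer rates, peak theoretical bandwidth for a DDR4 module is easy to compute: the data bus is 64 bits (8 bytes) wide, so bandwidth is simply the transfer rate times eight. A quick sketch:

```python
# Peak theoretical DDR4 bandwidth: (transfers per second) x (8 bytes per
# transfer on the 64-bit data bus). Per module; dual-channel doubles it.

def ddr4_bandwidth_gbs(mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a DDR4 transfer rate given in MT/s."""
    return mt_per_s * 8 / 1000

print(ddr4_bandwidth_gbs(2666))  # 21.328 GB/s
print(ddr4_bandwidth_gbs(3000))  # 24.0 GB/s
```

In a typical dual-channel laptop, those per-module figures roughly double in aggregate.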

ADATA did not announce MSRPs for its XPG Hunter DDR4 memory modules, but considering that these are not SO-DIMMs designed for extreme PCs, the new modules are unlikely to be particularly spendy.

AMD Radeon RX 5500 XT 8GB Review

AMD’s retail Radeon RX 5500 XT series of graphics cards are here, bringing additional options to the budget and mid-range graphics card market. I looked at the Radeon RX 5500 XT 4GB last week for the launch, and now it’s time for a review of the 8GB card. Just how much does doubling down on VRAM do for performance? As you’d expect, it depends on the games and settings you’re running.

If you haven’t read the 4GB model review, I suggest you start there. AMD’s Navi / RDNA architecture hasn’t changed, though Navi 14 does cut the potential number of compute units (CUs) from a maximum of 40 down to a maximum of 24—with the 5500 XT enabling 22 of the CUs. The two cards I’m reviewing look identical, which is expected as both are Sapphire Pulse models. The only difference is the VRAM and a $30 increase in price.

While the specs might not have changed, other than VRAM, the price increase has some ramifications. The 5500 XT 4GB card matches up against the GTX 1650 Super, and the two are effectively tied in performance. The 8GB card meanwhile has to take on the GTX 1660, and Nvidia’s card has the same number of GPU cores, 1,408. The problem for AMD is that Nvidia’s GPU cores tend to perform a bit better overall, so the GTX 1660 is likely to come out ahead in performance. At least the TDP is basically the same—120W vs 130W won’t really matter either way.

There’s little difference in features to speak of. Technically, Nvidia’s GTX 1660 can enable ray tracing in games that support it. In practice, the 1660 isn’t fast enough to make ray tracing worthwhile. There are other facets of performance—AMD GPUs often perform better in DirectX 12 games, Nvidia GPUs can do better in DX11 and other games—but overall the two sides are evenly matched in the mid-range and lower markets.

There’s also a jump in generational pricing, especially looking at current street prices. The RX 590 originally launched at $279 last year; today, you can get that same card for $179. It uses more power, but performance is going to be very similar to the RX 5500 XT 8GB. Even AMD’s own numbers say as much, where AMD compares the 5500 XT to the older RX 480 cards and claims a 30 percent improvement in performance. AMD’s not wrong, but the RX 470/480 are three years old, so a 30 percent improvement at only slightly lower prices isn’t all that impressive.

Which brings me to the real question: How does the RX 5500 XT 8GB perform? Let’s get to the benchmarks.

Radeon RX 5500 XT 8GB gaming performance

My standard testbed features an overclocked Core i7-8700K running at 5.0GHz. For budget and midrange cards, the CPU might be overkill, but it shows the highest potential performance for the graphics cards. Anyone considering the RX 5500 XT will probably be running a slightly slower CPU, but anything from the past several years should be sufficient. I’ve tested 11 games for this review, with a reasonable split between games that favor AMD hardware, and games that run better on Nvidia hardware.

Testing is done at 1080p ‘medium’ and ‘ultra’ settings (which may go by different names, depending on the game), as well as 1440p at ‘ultra’ settings. Each game is tested multiple times, using the median result, to ensure consistency of performance. I’ve included a few more GPUs this time, mostly for reference—if you’re thinking about upgrading from a Vega 56, for example, the 5500 XT isn’t going to be very compelling. It looks better against the old GTX 970 and R9 390, however.

At 1080p medium, there’s no real benefit to the 8GB RX 5500 XT compared to the 4GB model. A few games even perform slightly faster with the 4GB card, which is a bit odd—it’s probably just slight differences in memory latency. The RX 5500 XT cards are also tied with the GTX 1650 Super in overall performance, but the GTX 1660 is about 5 percent faster. Running medium quality at 1080p isn’t really the goal if you’re going to pay extra for the 8GB card, however.

Stepping up to 1080p ultra starts to favor the 8GB model over its lesser sibling, though it’s only a seven percent improvement on average. Individual games show much larger gains, though—Shadow of the Tomb Raider performance improves by 9 percent, Forza Horizon 4 runs 15 percent faster, Borderlands 3 is 18 percent faster, and Assassin’s Creed Odyssey opens up a 33 percent gap. The remaining seven games all show relatively similar performance, but there have been quite a few recent games that I’m not showing here where having more VRAM would also be beneficial (eg, Red Dead Redemption 2).

The difficulty AMD faces is that at 1080p—which is really the resolution you should plan on using with a mid-range or budget GPU—even at maxed out quality there aren’t a ton of games that truly need more VRAM. And of the games that benefit from more VRAM, none of them seem to need more than 6GB. That means once again that the GTX 1660 costs as much as the 5500 XT 8GB but performs about five percent better.

At 1440p ultra, the 5500 XT 8GB shows its largest lead yet relative to the 4GB cards. It’s now 12 percent faster, but we’re also looking at average performance across the test suite of 45 fps. Every game still stays above 30 fps, which means performance is still generally better than current consoles, but only two of the tested games (Strange Brigade and Forza Horizon 4) average 60 fps or more. Lighter esports games like CS:GO and Overwatch should be viable at 1440p, however.

I’m not showing 4K charts, because framerates are about half what you get at 1440p, but if you’re wondering: the 8GB card ends up with a 25 percent average performance lead over the 4GB model. It also comes out tied with the 1660 at 4K ultra, though both sit at 25 fps. Long-term, the 6GB and 8GB cards are a safer choice than a 4GB card, and I wouldn’t touch a 2GB model at this point, but if you want more performance, you’ll still be better off getting a faster GPU rather than doubling VRAM.

AMD Radeon RX 5500 XT 8GB: more and less

As a general rule, I advise people not to skimp on VRAM. I’ve been saying that since the GTX 1060 3GB/6GB and RX 570/580 4GB/8GB launches several years back. Yes, it costs more for the higher VRAM models, and performance doesn’t always improve a lot with the extra memory. Except when it does, and then you could end up very sad that you cheaped out on your graphics card and now have to turn down some settings. The RX 5500 XT 8GB is a continuation of that recommendation. You can save $30 by purchasing the 4GB model, but long-term I don’t think that’s the best plan.

That’s the “more” part of the equation, but it’s not a clear win, and there’s a reason I’ve scored this card slightly lower than the 4GB model. First, even across 11 games, the improvement in performance at 1080p ultra is mostly minor (with a few exceptions). In an AMD-only world, I’d still recommend buying the 8GB model, but AMD isn’t alone; there are multiple other options. Nvidia’s GTX 1660 costs the same as the 5500 XT 8GB, or you can spend another $30 to get a GTX 1660 Super.

Alternatively, look at more than just the price of the graphics card and its performance. Spending 15 percent more ($230) for a 20 percent increase in framerates (i.e., the GTX 1660 Super) is reasonable on its own. But if you’re putting that card into a gaming PC that costs $500 for the other components, it’s really $730 vs. $700: a mere 4 percent increase in total cost for the same 20 percent improvement in performance. Or step up to an RTX 2060 and it’s $800 vs. $700, a 14 percent increase in PC cost for a 45 percent jump in performance. Spending a bit more for each higher tier of graphics performance is a slippery slope, but even if you stop at the $200 mark, the RX 5500 XT 8GB doesn’t come out as the champion.
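That build-cost arithmetic can be wrapped in a small helper. This sketch re-runs the article’s hypothetical prices; the $500 figure for the rest of the build is the article’s example, not current street pricing:

```python
# The article's hypothetical build: $500 for everything except the GPU.
BASE_BUILD = 500

def total_cost_increase_pct(gpu_a: int, gpu_b: int) -> float:
    """Percent increase in whole-system cost when swapping GPU a for GPU b."""
    return ((BASE_BUILD + gpu_b) / (BASE_BUILD + gpu_a) - 1) * 100

# RX 5500 XT 8GB ($200) vs. GTX 1660 Super ($230): the card alone costs
# 15% more, but the whole build only ~4% more, for ~20% more performance.
print(f"{total_cost_increase_pct(200, 230):.1f}% more total")

# RTX 2060 ($300): ~14% more total cost for ~45% more performance.
print(f"{total_cost_increase_pct(200, 300):.1f}% more total")
```

The takeaway is the same as the paragraph above: the marginal cost of a GPU tier shrinks once you amortize it over the whole system.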

Ultimately, AMD’s Radeon RX 5500 XT series is a reasonable offering in the budget to midrange graphics card market. It’s not clearly superior to other options, but it’s also not clearly worse. If you’re looking to buy a new graphics card, whether as an upgrade to an existing PC or as part of a new gaming PC build, it’s worth a look. Performance of the 5500 XT 8GB is basically equal to the outgoing RX 590, while using about 100W less power. You wouldn’t want to “upgrade” from a 590 to a 5500 XT, in other words, but if you’re trying to choose between those two AMD options, I’d grab the newer model. If you’re not set on AMD, however, I’d look to the GTX 1660 or 1660 Super.

Qualcomm Snapdragon 865 benchmarks aren’t exactly crushing the iPhone’s A13 chipset

Qualcomm recently unveiled its next flagship chipset, the Snapdragon 865. It’s meant to be the company’s most powerful chipset to date, and we’re hearing a lot about what it could do for Android phones come 2020, but recent benchmarks suggest it might still lag a teensy bit behind Apple’s 2019 A13 Bionic chipset.

This is according to a recent Geekbench 5 test that the folks at HotHardware ran on the chipset. Based on the results, the Snapdragon 865 trails the A13 Bionic ever so slightly. The difference is marginal, and honestly, we doubt it would be noticeable in day-to-day use, if at all.

Now, before our readers get up in arms over this comparison, we should note that Geekbench only tests the CPU cores of the chipset. Also, HotHardware got an early preview of the chipset and it is possible that with additional tweaks and improvements, the final version could very well pull ahead of the A13 Bionic.

It’s not all bad, though, as the Snapdragon 865 does perform much better than its predecessor. Further testing on AnTuTu reveals huge gains over the Snapdragon 855: Android Authority found that the Snapdragon 865 scored a whopping 540,000, versus the Snapdragon 855’s 420,000. It also beat out the A13 Bionic, which scored 506,000.
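To put concrete percentages on the scores quoted above (a quick sketch using the figures as reported by Android Authority):

```python
# AnTuTu scores as quoted in the article.
scores = {
    "Snapdragon 865": 540_000,
    "Snapdragon 855": 420_000,
    "A13 Bionic": 506_000,
}

# Relative gain of the Snapdragon 865 over each other chip.
gain_vs_855 = scores["Snapdragon 865"] / scores["Snapdragon 855"] - 1
gain_vs_a13 = scores["Snapdragon 865"] / scores["A13 Bionic"] - 1

print(f"vs Snapdragon 855: +{gain_vs_855:.1%}")  # a ~29% generational jump
print(f"vs A13 Bionic:     +{gain_vs_a13:.1%}")  # a much slimmer ~7% lead
```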

That being said, benchmarks don’t always tell the whole story; many factors go into what makes a phone good, so this is just one aspect of it. Hopefully, once we start seeing more Snapdragon 865 phones in 2020, we’ll get a better idea of how well it really performs.

AMD Ryzen 9 3900X Overtakes Higher-Clocked Core i9-7920X for wPrime World Record

As spotted by Shaun Fosmark, Australian overclocker jordan.hyde99 has set a new world record in wPrime 1024M with AMD’s Ryzen 9 3900X processor. What’s fascinating is that the AMD processor was running at a much lower clock speed than the previous record holder, the Intel Core i9-7920X.

The Ryzen 9 3900X and Core i9-7920X both come equipped with 12 cores and 24 threads, so the chips are on even ground in terms of core count. The former is based on AMD’s Zen 2 microarchitecture and rocks a 3.8 GHz base clock and 4.6 GHz boost clock, while the latter utilizes Intel’s Skylake microarchitecture and clocks in at 2.9 GHz base and 4.4 GHz boost.

Overclocker jordan.hyde99 overclocked his Ryzen 9 3900X to 5,625 MHz on liquid nitrogen and finished the wPrime 1024M benchmark run in 35 seconds and 517 milliseconds to take the crown away from the Core i9-7920X. 

In comparison, the Core i9-7920X previously held the record at 35 seconds 693 milliseconds while being overclocked to 5,955 MHz with the same type of exotic cooling.

Although the Ryzen 9 3900X only managed to beat the Core i9-7920X by a negligible margin (less than one percent), the big takeaway here is the clock speed. The Ryzen 9 3900X matched the Core i9-7920X’s score while running at a roughly 5.5% lower operating clock. This stands as testimony that Zen 2’s instructions per cycle (IPC) throughput is higher than that of Skylake, and of the subsequent microarchitectures based on the same design, in many workloads.
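That IPC claim can be sanity-checked from the two record runs. This is a rough sketch, not an official methodology; it simply assumes both runs performed the same fixed amount of work:

```python
# Back-of-the-envelope effective IPC comparison: for a fixed workload,
# total cycles consumed = run time x clock speed, and fewer cycles for
# the same work means higher IPC. Figures are taken from the article.

def cycles_m(run_time_s: float, clock_mhz: float) -> float:
    """Total cycles consumed by the run, in millions."""
    return run_time_s * clock_mhz

zen2_cycles = cycles_m(35.517, 5625)     # Ryzen 9 3900X at 5,625 MHz
skylake_cycles = cycles_m(35.693, 5955)  # Core i9-7920X at 5,955 MHz

# Skylake needed ~6% more cycles to finish the same benchmark.
ipc_advantage = skylake_cycles / zen2_cycles - 1
print(f"Zen 2 effective IPC advantage: {ipc_advantage:.1%}")
```

The result works out to roughly a 6 percent effective IPC edge for Zen 2 in this particular workload.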

Overclock Your Radeon RX 5500 XT To 2.1 GHz On Air With This Tool

An Igor’s Lab community member who goes by the name of ‘hellm’ has updated his MorePowerTool (MPT) AMD graphics card overclocking tool to support the recently launched Radeon RX 5500 XT.

The Radeon RX 5500 XT comes with a 1,607 MHz base clock, 1,717 MHz game clock and 1,845 MHz boost clock. However, AMD’s partners are free to offer overclocked custom models at their discretion. If you’re still not happy with the performance, AMD’s latest Radeon Software Adrenalin 2020 Edition offers a nifty little tool to overclock the graphics card even more. The MorePowerTool takes it a step further.

First of all, it’s important to highlight that the MorePowerTool is unofficial overclocking software not sanctioned by AMD, meaning you use it at your own risk.

The MorePowerTool modifies the SoftPowerPlayTables to allow the graphics card to draw more power. By doing so, you’re essentially bypassing the card’s TDP (thermal design power) limit, which should improve overclocking headroom.

There is always a risk of damaging your graphics card when you run it outside of the manufacturer’s specifications. Therefore, the settings are not suitable for 24/7 operation. Moreover, the MorePowerTool is designed specifically for experienced enthusiasts and overclockers that know what they’re doing.

The MorePowerTool can reap huge rewards if you’re adventurous enough to try it, though. Our colleague Igor has informed us that the MorePowerTool has allowed the Radeon RX 5500 XT to hit speeds of up to 2.1 GHz on air cooling, and the card has been tested at 2.2 GHz as well. Furthermore, Igor believes that every Radeon RX 5500 XT should have no problem reaching the 2 GHz mark.

Samsung is working on a 5 nm Exynos 1000 SoC

With the introduction of the Exynos 990 SoC that is supposed to power the upcoming Galaxy S11 lineup, Samsung completely ditched its focus on custom high-performance processor cores. The Mongoose cores in the Exynos 9820, which should have provided an edge over the ARM Cortex-A76 cores featured in the Snapdragon 855, did not really cut it, so Samsung decided to shut down its custom core development facility in Austin. As always, Samsung will be offering both Exynos 990 and Snapdragon 865 versions of its 2020 flagships, even though the Exynos SoCs will most likely prove a bit slower. But not all is lost, as Samsung is rumored to already be working on a next-gen Exynos 1000 SoC that should at least level the playing field.

The rumor comes straight from reputable leaker Ice Universe, who recently reported that Samsung could be using ARM’s Cortex-A78 as high-performance cores for its next gen Exynos 1000 SoC. Qualcomm will most likely use the same cores for whatever comes after the Snapdragon 865, so it all depends on how well each of the companies integrates these cores with other components like the GPU. Speaking of GPUs, Ice Universe reports that Samsung has not yet decided on the exact model to be integrated in the Exynos 1000. Since it is sticking with ARM’s technology, it might opt for the Mali-G78 GPU, but chances are that Samsung will finally include an RDNA mobile GPU designed in collaboration with AMD.

Last, but not least, Ice Universe mentions that the Exynos 1000 should be produced using the 5 nm process. Samsung has its own fabs, but it is a bit behind TSMC with the availability of the 5 nm nodes, so the Exynos 1000 will most likely be released towards the end of 2020.

Kirin 1020 Chip Codenamed Baltimore, Coming With 5nm Process

At present, news about Huawei’s next-generation flagship processor Kirin 1020 is gradually increasing. However, the processor will not be released until next fall.

The news comes from Weibo, where one blogger simply posted ‘5nm Baltimore, ready to verify!’ In the past, the internal codenames of HiSilicon chips have usually been named after foreign cities. Plus, there are only two customers for TSMC’s 5nm process at present: Apple and HiSilicon. So there is every reason to think this is a hint that Kirin’s next-generation flagship processor is ready to enter the tape-out verification stage.

Processor tape-out is generally divided into pre-tape-out and post-tape-out verification, and in September some sources said that the 5nm HiSilicon processor had officially taped out. The next step would be bringing up a development board to verify that the finished silicon meets design expectations, before entering the engineering sample test phase.

It is worth mentioning that, according to some reports, the Kirin 1020 skips the Cortex-A77 entirely and jumps straight to the A78 architecture. But it seems too early to discuss the architecture; more accurate information probably won’t surface until at least next summer.

As per convention, Huawei launches its new flagship Kirin processors every autumn, so it should be no surprise to see the Huawei Mate 40 series debut with this processor.

Regardless of the launch date, what interests us most is the performance increase, which according to the latest rumors would be 50% over the Kirin 990, a considerably large jump. For perspective, the Snapdragon 865’s increase over the 855 is less than 25%.

Overclocker Gets 1TB of RAM Running on X299 Motherboard Limited to 256GB

ASRock announced this week that renowned overclocker Nick Shih has filled ASRock’s X299 Taichi CLX workstation-grade motherboard with 1TB of memory, despite the board officially supporting only up to 256GB.

To achieve this feat, Shih paired an Intel Core i9-7900X with eight SK Hynix 128GB LRDIMM (load-reduced DIMM) modules. This is quite fascinating, considering we were told that the X299 platform doesn’t support ECC (error-correcting code) memory, much less LRDIMMs. It’s unknown what kind of magic Shih had to use to get the aforementioned memory working on the ASRock X299 Taichi CLX.

SK Hynix already has a couple of 256GB LRDIMMs in its portfolio, but they max out at 2,666 MHz. The ones that Shih used seem to be unreleased. According to the screenshots shared, the LRDIMMs carry the HMABAGL7MBR4N-WM part number and come clocked at 2,933 MHz with CL21-21-21-47 timings and a 1.20V operating voltage. 

What makes Shih’s accomplishment even more spectacular is the fact that he was able to push the LRDIMMs to 3,471.8 MHz with the memory timings configured to CL20-24-24-56. That Core i9-7900X must have one heck of an IMC (integrated memory controller) to support up to 1TB of memory operating at those speeds.

Samsung revealed its 256GB LRDIMMs last year. It would certainly be mind-blowing to see 2TB of memory running on a consumer-grade motherboard. The tools are there ready for anyone to use. Now, we just need a daring overclocker to attempt it.