AMD Ryzen 5 5600X ends Intel’s hegemony in single-thread perf; 10% faster than Core i9-10900K and 23% faster than Ryzen 9 3900X

Intel has traditionally held the single-core performance crown for many generations. AMD got close with Zen 2 against Intel's 9th Gen Coffee Lake, but Comet Lake once again held clear leads in single-core and gaming. With Ryzen 5000 and Zen 3, Intel may no longer be in a comfortable position.

User /u/losh11 (@loshan1212) first posted CPU-Z results of the Ryzen 5 5600X on Reddit. The OP has since deleted the post, but the results were quickly shared to other subreddits. CPU-Z screenshots and product packaging of the Ryzen 5 5600X were subsequently reposted by @GawroskiT on Twitter.

In the CPU-Z single-thread benchmark, the Ryzen 5 5600X scored 643 points, leading the Core i9-10900K by 10% and the Core i7-10700K by 15%. The Zen 3 processor also appears to be 23.4% faster than the Ryzen 9 3900X in single-core, which shows that AMD is leaving no stone unturned in the quest to extract the maximum possible peak performance from the new architecture.

The Ryzen 5 5600X seems to be no slouch in multi-core either, amassing 4,814 points: 12% faster than the Core i5-10600KF and 14% faster than the Ryzen 5 3600XT.
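
For readers who want to check the math, here's a minimal sketch of how those leads fall out of the raw scores; the baseline scores below are approximations back-solved from the percentages quoted above, not official CPU-Z database entries.

```python
# Back-of-the-envelope check of the CPU-Z leads quoted above.
# The 5600X scores are from the leak; the comparison scores are
# approximate values implied by this article's own percentages.

def lead(new: float, old: float) -> float:
    """Percentage lead of `new` over `old`."""
    return (new / old - 1) * 100

ryzen_5600x_st = 643       # leaked single-thread score
core_i9_10900k_st = 584    # assumed baseline, ~643 / 1.10
ryzen_9_3900x_st = 521     # assumed baseline, ~643 / 1.234

print(f"vs Core i9-10900K (1T): {lead(ryzen_5600x_st, core_i9_10900k_st):+.1f}%")
print(f"vs Ryzen 9 3900X (1T):  {lead(ryzen_5600x_st, ryzen_9_3900x_st):+.1f}%")
# -> roughly +10% and +23.4%, matching the leads quoted above
```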

This is not the first demonstration of AMD's newfound single-core leads, though. Previously, we saw the AMD Ryzen 9 5950X and the Ryzen 5 5600X posting impressive single-core results, beating Intel's flagship Core i9-10900K by a significant margin.

While it is always advisable to wait for final reviews before coming to conclusions, these initial scores do indicate that AMD may have a definite winner on their hands with Zen 3 that does not compromise either in single-core or multi-core workloads.

Nvidia RTX 3070 review: AMD’s stopwatch just started ticking a lot louder

The RTX 3070, Nvidia's latest $499 GPU launching Thursday, October 29, is tricky to talk about thanks to the timing of today's review embargo. As of right now, the RTX 3070 is the finest GPU in this price segment by a large margin. In 24 hours, that could change—perhaps drastically.

Ahead of AMD’s big October 28 event, dedicated to its RDNA 2 GPU line, Nvidia gave us an RTX 3070 Founders Edition to test however we saw fit. This is the GPU Nvidia absolutely needed to reveal before AMD shows up in what is expected to be the same price and power range.

Inside an Nvidia-only bubble, this new GPU is a sensation. Pretty much every major RTX 2000-series card overshot with proprietary promises instead of offering brute force worth its inflated costs. Yet without AMD nipping at its heels, Nvidia’s annoying strategy seemed to be the right call: the company established the RTX series’ exclusive bonus processing cores as a major industry option without opposition, then got to wait a full year before competing with significant power jumps and delectable price cuts.

Last month’s RTX 3080 saw that strategy bear incredible fruit—even if ordering that $699 GPU is still seemingly impossible. But what happens when Nvidia scales down the Ampere 7nm promise to a $499 product that more people can afford? And how will that compare to whatever AMD likely has to offer in the same range?

Future-proofing around the 1440p threshold

We can only answer some of those questions today. (Until Nvidia proves otherwise, we assume that availability will continue to be a massive asterisk for this and all other RTX 3000-series cards.) In good news, at least, the RTX 3070 gets off to a roaring start by rendering its 2019 sibling, the RTX 2070 Super, moot. Both debuted at $499, but the newer option typically approaches, and occasionally bests, the RTX 2080 Ti (whose $1,199 MSRP in 2018 sure feels like a kick in the ray-traced teeth nowadays).

But the RTX 3070’s price-to-performance ratio comes with one significant caveat: a not-so-future-proofed VRAM capacity of 8GB, shipping in the not-as-blistering category of GDDR6. That matches the best RTX 2000-series cards but is surpassed by the higher-speed GDDR6X VRAM in pricier RTX 3000-series GPUs.

The thing is, “future-proofed” for PC gaming is relative. What’s going to matter in 3D processing in the near future, both for the games you love and the systems you run them on? If you’re set on having the crispest native 4K rendering for the foreseeable future, the RTX 3070 doesn’t leapfrog the 2080 Ti, particularly with a VRAM allotment that could stress any games that ship with 4K-specific texture packs.

But if you’re favoring a lower-resolution panel, perhaps 1440p or a widescreen 1440p variant—and Steam’s worldwide stats make that a safe assumption—then your version of future-proofing revolves more around processing power and ray-tracing potential. In those respects, the RTX 3070 currently looks like the tippy-top option for a “top-of-the-line” 1440p system… with the bonus of Nvidia’s Deep Learning Super-Sampling (DLSS) for surprisingly competitive fidelity in 4K resolutions, should gamers upgrade their monitor between now and the next GPU generation. (Until AMD shows us otherwise, Nvidia’s proprietary DLSS 2.0 pipeline remains the industry’s leading upscaling option, and game studios have started embracing it in droves.)

In other words, if you’re more interested in high frame rates on resolutions less than 4K, and you want GPU overkill for such a CPU-bound gaming scenario, the RTX 3070 is this year’s best breathing-room option for the price… at least, unless AMD announces an even more compelling proposition on October 28.

Strong, but not the 2080 Ti topper we expected

The above collection of game benchmarks mostly mirrors the ones I used for my RTX 3080 review, and once again, these tests err on the side of graphical overkill. You may have zero interest in using an RTX 3070 with 4K resolutions or maximum graphical slider values, and that’s understandable. Instead, these tests are designed to stress the GPU as much as possible to present the clearest comparisons between the listed cards. Look less at the FPS values and more at the relative percentages of difference. (The exception comes from “DLSS” tests, which I’ll get to.)

Even though this year’s $499 RTX 3070 clearly exceeds the power of last year’s $699 RTX 2080 Super, I tested it against last year’s $499 RTX 2070 Super as well to show exactly what a difference a year makes in terms of price-to-power proposition. The percentage differential between the 70-suffix GPUs varies based on what kind of software you’re testing, but the most massive surge in performance can be found when ray-tracing effects are toggled at pure 4K resolution. Wolfenstein Youngblood, in particular, sees the 3070 double the 2070 Super’s frame rates in its ray-tracing benchmarks.

While Nvidia has made benchmarking claims that put the RTX 3070 ahead of the RTX 2080 Ti, that doesn’t necessarily bear out in my testing—but this is because the RTX 2080 Ti Founders Edition shipped in 2018 with a remarkable capacity for safe overclocking. The 3070 FE, like its 2070 Super sibling, seriously lacks headroom for safe overclocking of either its core or memory clocks, as managed by test-at-every-step automation in programs such as EVGA Precision X1. Testing was nearly identical on the 3070 with or without a scant Precision X1 overclock applied, so I’ve left its OC tests out of this roundup. Remember: as Nvidia’s Founders Editions go, generally, so go other vendors’ variants. So we’re not sure other vendors will squeeze much more out of the same model.

Thus, the 2080 Ti still pulls ahead in most, but not all, of the above gaming benchmarks, whether ray tracing is or isn’t enabled. When comparing both cards’ specs, this difference checks out, since the newer 3070 cuts back on certain components for efficiency’s sake (not to mention that dip in VRAM capacity). Categories like Tensor cores and RT cores are listed as “newer-generation” versions for the 3070: 184 “third-generation” Tensor cores versus 544 older Tensor cores in the 2080 Ti, and 46 “second-generation” RT cores versus 68 older RT cores. The bigger 3000-series cards beat the 2080 Ti in both quantity and generation, so they get the clearer wins; the 3070 sees that efficiency trade fail to win out in certain testing scenarios—nothing tragic, mind you, but worth noting in case you’d hoped for across-the-board wins against the 2080 Ti.

Size, ports, noise

The RTX 3070’s efficiency figures into its size reduction, down to 9.5 inches in length (242mm) from the RTX 2070 Super’s 10.5 inches (but not quite as small as the original RTX 2070’s 9-inch length). Like other 3000-series FEs, the RTX 3070 utilizes what Nvidia calls a “flow-through” design that pulls cool air from below and pushes hot air out in two directions: through its “blower,” out the same side as its DisplayPort and HDMI connections, and upward in the same direction as your motherboard’s other components. Basically, the size reduction may help you cram an RTX 3070 into a smaller case, but you’ll still want to guarantee considerable airflow.

Speaking of connections, they’re identical to what you’ll find on the RTX 3080: three for DisplayPort, one for HDMI 2.1. (If you missed it, Nvidia quietly dumped the VR-friendly USB Type-C “VirtualLink” port found in most RTX 2000-series cards from this year’s GPU generation, perhaps owing to how few VR headset manufacturers bothered supporting it.) Additionally, the 3070 continues the RTX 3000-series trend of employing a smaller 12-pin connector for power, though it ships with an adapter for today’s common 8-pin PSU standard. In the 3070’s case, it only requires one 8-pin connection to a PSU, not two (or a mix of 8-pin and 6-pin), even though it maxes out at a 220W power draw. (The 2070 Super requires one 8-pin and one 6-pin connector with a power-draw maximum of 215W.)

And when Nvidia brags that the RTX 3070 runs quieter, the company means it. While I lack solid decibel-measuring equipment to tell you exactly how much quieter this card runs than its competition, it’s safe to say that its full-load mix of fan noise and operational hum probably won’t be the loudest component in your system. And with my ear directly up to it, its noticeable noise certainly wasn’t louder than, say, a PlayStation 4 Pro. (Nvidia has described its noise level as “up to 16dBA quieter” than the original RTX 2070 Founders Edition.)

Thoughts on 1440p, ray tracing, and DLSS

The above benchmarks make clear that 4K/60fps performance in newer PC games, with all settings maxed out, isn’t a given on the RTX 3070. But it’s important to note that many of these tests include overkill settings for things like anti-aliasing, shadow resolution, and even “maximum” ray-tracing effects, all meant to guarantee maximum GPU impact for the sake of accurate comparisons between the GPUs. In the real world, you can safely drop most of these from “ultra,” “extreme,” or “insane” while still exceeding most console ports’ settings and remaining barely distinguishable from their over-the-top maximums, and the results often land darned close to 4K/60.

Scale down to a resolution like 1440p and you’ll hope for frame rates that take advantage of monitors rated for 144Hz and above. One good indicator of the RTX 3070’s capabilities is Borderlands 3, a particularly demanding (and arguably inefficient) game that doesn’t leverage Nvidia-specific GPU perks while packing its scenes with dynamic lighting, alpha particle effects, cel-shaded detail, and massive draw distances. When put through its benchmark wringer at 1440p on my testing rig (i7-8700K OC’ed to 4.7GHz, 32GB DDR4-3000 RAM), BL3 averages 99.5fps at the “high” settings preset or 88.0fps at “ultra.” Not 144fps, mind you, but I think of BL3 as a good performance “floor,” easily outdone by older and more efficient 3D games.

Without ray tracing turned on in 3D games from the past few years, RTX 3070’s frame rates have easily surpassed 80fps with tons of bells and whistles enabled at 1440p resolution, and they’ve easily gone higher with every drop in settings from there. But what happens on the RTX 3070 with ray tracing turned on?

As of press time, there’s an interesting combined trend for just about everything I’ve tested with some version of DirectX Ray Tracing (DXR): the harmonious pairing of Nvidia’s latest “DLSS 2.0” standard. Should you run GPU-pounders like last year’s Control or this month’s Watch Dogs Legion at near-max settings and 1440p resolution, plus ray tracing enabled, you can expect frame rates at roughly 50 to 55fps on the RTX 3070. But a funny thing has happened with DLSS 2.0: much improved support for DLSS upscaling from 906p to 1440p. Last year, I would’ve told you that you were crazy to upscale from anything lower than 1440p, in terms of pixel smudginess.

When testing at 1440p, Control has seen its DLSS 2.0 translation of tiny details, particularly text on posters, improve compared to native rendering plus temporal anti-aliasing (TAA). Meanwhile, WDL’s benchmark is keen on adding rain to its mix, which is clever on Ubisoft’s part; this is the exact kind of detail that DLSS has struggled to render in games like Death Stranding, yet in this newer game, rain materializes almost identically when its 906p signal is upscaled with DLSS’ machine-learning wizardry.

With both of these games’ DLSS modes toggled at this 906p upscale, frame rates jump to the 78-84fps range… and that’s with ray tracing enabled (“high” RT settings in Control, “medium” RT settings in WDL).
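
To see why DLSS buys so much performance here, it helps to count pixels. A minimal sketch, assuming standard 16:9 frames at the quoted vertical resolutions:

```python
# Pixel budget of DLSS upscaling: rendering internally at 906p and
# upscaling to 1440p means shading well under half the pixels.

def pixels(vertical: int, aspect: float = 16 / 9) -> int:
    """Total pixels in a frame of the given height, assuming 16:9."""
    return round(vertical * aspect) * vertical

native_1440p = pixels(1440)    # 2560 x 1440
internal_906p = pixels(906)    # ~1611 x 906

print(f"906p / 1440p pixel ratio: {internal_906p / native_1440p:.2f}")
# ~0.40 -> the GPU shades roughly 40% of the pixels, consistent with
# frame rates climbing from ~50-55fps to ~78-84fps. (Scaling isn't
# linear, since DLSS itself and non-resolution-bound work cost time.)
```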

A masterful game of GPU dominoes

Nvidia really couldn’t have set these dominoes up any better. Its RTX line of GPUs has separate components to handle the above fancy features—dedicated ray-tracing cores and dedicated “tensor” cores to handle ML-assisted computation. The way its ray-tracing cores work lines up neatly with industrywide standards like DXR, which means it’s a drop in the programming budget to implement those in ways that will work on competitors’ GPUs and on brand-new gaming consoles. And the tensor cores’ upscaling methods line up neatly with TAA, a particularly common anti-aliasing standard that Nvidia’s DLSS effectively piggybacks off. As of DLSS 2.0, the model does not require game-specific coding to work (though developers still have to partner with Nvidia to implement it).

For Nvidia gamers, then, the ray-tracing proposition going forward is clear: if you want to turn it on, you’ll almost certainly have the simultaneous option of toggling the efficiency of Nvidia’s dedicated RT cores and the efficiency of their DLSS implementation.

In terms of pixel fidelity, DLSS 2.0 has pretty much proven to be a wash, with games generally enjoying a mix of sharper and blurrier elements depending on the scene (with neither being egregious, with the notable exception of Death Stranding’s peskiest, super-detailed moments like cut-scene details and screen-filling rain). And that’s a wash visually, not computationally; the proof is in the frame-rate pudding.

We still don’t know if AMD can possibly compete when its future cards have their ray-tracing modes turned on. Maybe we’re in for a $500-ish scenario where AMD can beat Nvidia’s rendering performance in a game like Borderlands 3 at a better price-to-performance ratio, only to lose out on the same performance gains with ray tracing turned on. Having tested Watch Dogs Legion over the past week, I can safely say its RT perks—as slathered over a massive, open-world city full of reflective surfaces and other handsome light-bounce effects—are difficult to disable now that I have a midrange GPU that can reasonably handle said effects at “1440p.”

Meaning, I could turn them off… but I no longer want to. It’s hard to go back to plain ol’ rasterization after seeing so many light sources realistically emerge no matter what time of day or scenario I’m in. As I pilot a drone past a shiny office building, or drive in a shiny, future-London car past a beautiful landmark, I see objects in WDL reflect or bounce light in ways that acknowledge objects or light sources that otherwise aren’t on the screen. This is what ray tracing does: it accounts for every nearby light bounce, even if it’s not on screen, to render the whole world whether you can see it directly or not.

Plus, if you have dreams of one day toggling ray-tracing power at 4K with this card, WDL on an RTX 3070 at “high” settings gets up to a 58fps average in 4K resolution with RT at “medium,” so long as I use DLSS to upscale to 4K from… wait for it… 1440p native. Those upscaling results are convincingly crisp, as well.

Thus, as I said in the beginning, your definition of a “future-proofed” GPU will likely drive your interest in what the RTX 3070 has to offer for $499. We’re about to see even more interesting ray tracing in games—including at least one we’re not allowed to talk about yet. You’ll have to take our word for it, in terms of how exciting it is to live inside of some games’ ray-traced worlds.

If that’s not your bag, due to visual preferences or budgetary reasons, I get it. But it remains to be seen whether a cheaper RTX card can deliver the same future-proofing in the 1080p range or whether AMD will arrive with a perfect amount of budget-minded power and ray tracing—or even a butt-kicker of a card that skips ray tracing altogether in favor of powerful, traditional 3D rendering for a damned good price. For now, in the 1440p range, Nvidia has the clear lead… for at least 24 hours.

AMD Ryzen 9 5950X & Ryzen 9 5900X “Zen 3” CPU Benchmarks Leak Out, 16 Cores At 5 GHz & Huge Single-Threaded Performance Jump

The latest benchmarks of AMD’s upcoming Ryzen 9 5950X and Ryzen 9 5900X “Zen 3” CPUs have leaked out in the Geekbench database (via TUM_APISAK). The Ryzen 9 series processors will feature the highest core count available in the Ryzen 5000 series lineup and will be aimed at the enthusiast gaming market.

AMD Ryzen 9 5950X 16 Core & Ryzen 9 5900X 12 Core CPU Benchmarks Leak Out, Huge Boost In Single-Core Performance

The benchmarks for the AMD Ryzen 9 5950X and Ryzen 9 5900X show both chips running at close to 5 GHz, which is pretty impressive for such high-core-count chips. The two Ryzen 9 chips were running on different platforms: the Ryzen 9 5950X was featured on an ASUS ROG STRIX X570I Gaming motherboard with 16 GB of DDR4-3866 memory, while the Ryzen 9 5900X was featured on MSI's X570 GODLIKE motherboard with 16 GB of DDR4-3600 memory.

In Geekbench 5, the AMD Ryzen 9 5950X scored 1575 points in the single-threaded and 13,605 points in the multi-threaded test. The chip was running at clock speeds close to the 5 GHz mark, with the maximum reported clock speed hitting 4.983 GHz in the multi-threaded test and 5.01 GHz in the single-core test.

The AMD Ryzen 9 5900X scored 1605 points in the single-threaded and 12,869 points in the multi-threaded test. Once again, the chip was boosting beyond its rated boost clocks, with the highest core clock reported at 4.947 GHz in the multi-threaded and 4.95 GHz in the single-threaded test.

When it comes to performance comparison, the AMD Ryzen 9 5950X is around 20% faster than the AMD Ryzen 9 3950X in the single-threaded and around 10% faster in multi-threaded tests. AMD’s Ryzen 9 5900X is also around 20-25% faster in single-core but only around 6-8% faster on average in multi-core tests compared to its predecessor, the Ryzen 9 3900X.

Compared to Intel’s flagship Core i9-10900K CPU, the AMD Ryzen 9 5900X offers a 15-20% single-core performance bump and around a 15% performance bump in multi-threaded tests.
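
Reversing those percentages gives a rough idea of the baseline scores the leak is being compared against; the figures below are implied values, not official Geekbench results:

```python
# Back-solve the implied baseline scores from the leaked results
# and the percentage gaps quoted above.

def implied_baseline(score: float, lead_pct: float) -> float:
    """Baseline implied by `score` being `lead_pct` percent ahead."""
    return score / (1 + lead_pct / 100)

# Ryzen 9 5950X: 1575 ST / 13,605 MT, quoted ~20% ST and ~10% MT
# ahead of the Ryzen 9 3950X.
print(f"implied 3950X ST: ~{implied_baseline(1575, 20):.0f}")
print(f"implied 3950X MT: ~{implied_baseline(13605, 10):.0f}")

# Ryzen 9 5900X: 1605 ST, quoted 15-20% ahead of the Core
# i9-10900K; using the 17.5% midpoint here.
print(f"implied 10900K ST: ~{implied_baseline(1605, 17.5):.0f}")
```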

AMD Ryzen 9 5950X “Zen 3” Desktop CPU – 16 Cores / 32 Threads Up To 4.9 GHz For $799 US

Starting off with the top-of-the-line parts: the Ryzen 9 series is headlined by the Ryzen 9 5950X, the flagship 16 core part aimed at users demanding heavy multi-threaded performance on the AM4 socket.

The chip has a total cache of 72 MB and a TDP of 105W. It features a boost clock of up to 4.9 GHz, which is just stunning for a 16 core part. The AMD Ryzen 9 5950X is going to cost $799 US and will be available on the 5th of November, like the rest of the lineup.

AMD Ryzen 9 5900X “Zen 3” Desktop CPU – 12 Cores / 24 Threads Up To 4.8 GHz For $549 US

Next up, we have the Ryzen 9 5900X, a 12 core part that is aimed at users demanding heavy multi-threaded performance on the AM4 socket. The chip has a total cache of 70 MB and a TDP of 105W. It features a base clock of 3.7 GHz and a boost clock of up to 4.8 GHz, which is faster than the Ryzen 9 3900XT. The AMD Ryzen 9 5900X will feature a retail price of $549 US, which is $50 US higher than the MSRP of the Ryzen 9 3900XT, while delivering significantly faster performance.

In performance comparisons, the AMD Ryzen 9 5900X completely obliterates the Intel Core i9-10900K, delivering a single-threaded performance jump of up to 15% over the competing chip in Cinebench R20. The same goes for gaming, where the AMD Ryzen 9 5900X delivers up to 21% performance gains over the Core i9-10900K.

The AMD Ryzen 5000 Desktop CPUs, codenamed Vermeer, will launch on the 5th of November. The launch lineup will include the 16 core Ryzen 9 5950X, the 12 core Ryzen 9 5900X, the 8 core Ryzen 7 5800X, and the 6 core Ryzen 5 5600X.

All 500-series motherboards (X570/B550) will receive BIOS updates to support the next-gen lineup, so if you're planning to get a Ryzen 5000 CPU at launch, make sure you grab a board with a supported BIOS (do note that 400-series support comes later, around January 2021).

AMD Boosts Prices For New Ryzen 5000 Chips

With its first few generations of Ryzen desktop CPUs, chip company Advanced Micro Devices (NASDAQ:AMD) was aiming to catch up to its larger rival Intel (NASDAQ:INTC). The third-generation Ryzen 3000 series, launched in mid-2019, largely accomplished that goal, although Intel retained a single-threaded performance advantage which made its chips ideal for gaming.

With the Ryzen 5000 series, AMD claims to have finally surpassed Intel in single-threaded performance. The company touts as much as a 26% increase in gaming performance for one of its new high-end chips compared to its predecessor, and a 7% gaming performance advantage over a comparable Intel chip. AMD is claiming that the Ryzen 9 5950X offers the highest single-threaded performance of any desktop gaming processor. That would have been unthinkable just a few years ago.

Intel’s biggest selling point for PC gamers will be gone when Ryzen 5000 series chips become available for purchase on Nov. 5, at least if AMD’s numbers are to be believed. But gamers will have to pay up for AMD’s new products.

More powerful, less affordable

The notable thing about last year’s Ryzen 3000 series chips was the pricing. The six-core, 12-thread Ryzen 5 3600 had a suggested price of $199, and the 12-core, 24-thread Ryzen 9 3900X was priced at $499. For use cases where single-threaded performance isn’t critical, Ryzen 3000 offered a compelling value proposition.

This time around, AMD is using its newfound performance advantage over Intel to position Ryzen as a premium product. The lowest-end chip in the lineup right now is the six-core, 12-thread Ryzen 5 5600X. The 5600X will go for $299 when it launches, roughly 50% higher than the 3600 and 20% higher than the 3600X. Meanwhile, the Ryzen 9 5900X will succeed the 3900X at a 10% premium.
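
Those premiums are easy to verify from the suggested prices (the $249 figure for the 3600X below is its 2019 launch MSRP):

```python
# Generational price premiums implied by AMD's suggested prices.
def premium(new: float, old: float) -> float:
    return (new / old - 1) * 100

print(f"5600X ($299) vs 3600 ($199):  +{premium(299, 199):.0f}%")   # ~50%
print(f"5600X ($299) vs 3600X ($249): +{premium(299, 249):.0f}%")   # ~20%
print(f"5900X ($549) vs 3900X ($499): +{premium(549, 499):.0f}%")   # ~10%
```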

Those price premiums buy customers more performance, but in previous Ryzen generations, customers got that extra performance for nothing. The Ryzen 5 3600, for example, carried the same price as its predecessor despite substantial performance gains.

AMD’s willingness to raise prices indicates that it’s confident it can continue to win market share from Intel at higher price points. Intel’s desktop processors have been hamstrung by the company’s persistent problems with its 10 nm manufacturing process. The company’s current lineup still uses its 14 nm process, and its upcoming Rocket Lake desktop processors set to launch in the first quarter of next year are also expected to be 14 nm parts.

While Intel’s 14 nm process is heavily optimized at this point, the 7 nm process from Taiwan Semiconductor Manufacturing that AMD uses for its Ryzen chips has erased the manufacturing edge Intel once had. Rocket Lake is expected to be based on a new architecture which should bring meaningful performance improvements, but it will arrive months later than AMD’s Ryzen 5000 series.

The potential for higher margins

Boosting prices could certainly help AMD’s profitability. The company has made good progress increasing its gross margin over the past few years, driven by a growing focus on premium products and a mix shift away from lower-margin game console chips thanks to surging demand for PC products.

The Ryzen 5000 launch also comes at a time when sales of PCs are booming. PC shipments surged nearly 13% in the third quarter due to the pandemic, the highest growth in more than a decade. This growth was concentrated in the laptop market, but the data only includes sales of pre-built systems; a sizable chunk of the desktop PC market consists of do-it-yourself systems. Given that sales of video game products are also soaring due to the pandemic, demand for gaming desktop PC chips is likely strong right now.

On the other hand, AMD’s new chips will launch into a highly uncertain economic environment. The U.S. economy has lost millions of jobs due to the pandemic, a winter wave of COVID-19 could force new economy-crippling stay-at-home orders later this year, and the prospect of additional economic stimulus is unclear. Elevated demand for PCs and gaming products may not last if something close to the worst-case scenario plays out.

Four generations in, Ryzen is proving to be a powerful weapon in AMD’s quest to chip away at Intel’s market share. Intel will fight back early next year with Rocket Lake, but AMD appears to have a clear advantage with its Ryzen 5000 series.

Intel Claims It’s Cheaper To Build A Faster Gaming PC With Its 10th Gen Core CPUs Than AMD’s Ryzen 3000 CPUs, Pits The Core i7-10700K Against The Ryzen 9 3900XT

Intel has released a new set of ‘real-world’ performance benchmarks that compare its 10th Gen Desktop CPUs to the AMD Ryzen 3000XT lineup. The company specifically compares the gaming performance of its chips against the competition’s and claims that they offer the best performance package at a cheaper price.

Intel Claims Its Core i7-10700K Is Not Only Faster in Gaming But Also Better Value Than AMD’s Ryzen 3000XT CPUs

The new ‘real world’ performance slides were showcased during a presentation to the APAC regional market. While Intel’s own CEO has stated that the industry should move on from generic benchmark showcases, the recent decline in desktop processor shipments has pushed the company to try to win over consumers in the mainstream and high-performance CPU segments with a new set of comparisons. We won’t rehash how much controversy the ‘real performance’ benchmark slides have attracted since their first outing, but recently the blue team has started using similarly priced and similarly specced configurations, which do lead to a better representation of real-world usage.

Intel Core i7-10700K 8 Core vs AMD Ryzen 9 3900XT 12 Core CPUs Compared

The comparison consists of the Intel Core i7-10700K and the AMD Ryzen 9 3900XT. The Intel Core i7-10700K is an 8 core, 16 thread CPU with a base clock of 3.80 GHz, a boost clock of up to 5.10 GHz, 20 MB of cache, and a TDP of 125W (229W PL2). The AMD Ryzen 9 3900XT is a 12 core, 24 thread CPU with a base clock of 3.80 GHz, a boost clock of up to 4.70 GHz, 70 MB of cache, and a TDP of 105W.

Intel says that its Core i7-10700K CPU is cheaper, with a starting price of $387 US ($409.99 US at Newegg), while the AMD Ryzen 9 3900XT CPU is more expensive at $499 US ($479.99 US at Amazon). The standard Ryzen 9 3900X, which is not featured in these performance metrics, costs $429.99 US, a far better deal than the Ryzen 9 3900XT, which offers up to 5% better performance for a roughly 10% price difference.
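
Using the street prices quoted above, the value argument against the 3900XT is simple arithmetic; a quick sketch:

```python
# Price-versus-performance check for the Ryzen 9 3900XT vs 3900X,
# using the retail prices quoted above and the ~5% performance gap.
xt_price, x_price = 479.99, 429.99
price_gap = (xt_price / x_price - 1) * 100
print(f"3900XT costs ~{price_gap:.0f}% more for ~5% more performance")
# -> ~12% more money on street prices, in the same ballpark as the
#    ~10% gap cited in the text, for a ~5% speed bump.
```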

Intel used similar test setups, each featuring an NVIDIA GeForce RTX 2080 Ti graphics card and 16 GB of DDR4 memory. The memory on the Intel platform ran at 2933 MHz, while the AMD platform ran 3200 MHz DIMMs. There’s no mention of which motherboards the setups used, but considering that X570 and Z490 boards cost around the same, the total platform prices would be close, with Intel offering slightly better value versus the Ryzen 9 3900XT.

The performance benchmarks show that the Intel Core i7-10700K outperformed the AMD Ryzen 9 3900XT in several AAA titles. The Intel CPU was up to

23% faster in Total War: Warhammer 2

17% faster in League of Legends

15% faster in Monster Hunter World: Iceborne

14% faster in Rocket League

12% faster in War Thunder Tank Battle

6% faster in Total War: Three Kingdoms

In addition to the games where the Core i7 scored a double-digit win against the Ryzen 9 CPU, Intel also showcased several other titles where its CPU was more than 3% faster than the 3900XT. The lead was seen in 24 out of the 30 titles tested. In the remaining tests, the Ryzen 9 3900XT secured a lead, and surprisingly, the AMD CPU scored better than the Intel part in CS:GO, one of the most played esports titles. All performance tests were run at 1080p resolution. The slides were shared by Benchlife; however, the exact appendix of workloads and configurations isn’t shown.

Sure, Intel has a solid lead in gaming performance, which was the whole purpose of this comparison, but a 12 core, 24 thread CPU also has several benefits beyond gaming. The AMD Ryzen 9 3900XT can deliver better performance for game streamers and for general productivity and professional workloads, which many gamers these days also run simultaneously on their desktop PCs.

It’s an added benefit of the extra cores that Ryzen CPUs provide, and while Intel leads the gaming benchmarks, the gap isn’t huge, especially against the standard Ryzen 9 3900X, which costs pretty much the same as the Core i7-10700K at retail. If you’re purely building a gaming-focused machine, then Intel CPUs do retain the advantage, but the real question is how long that advantage will last with Zen 3 fast approaching the desktop PC market.

AMD Releases the A520 Chipset: For Ryzen on a Budget

AMD has officially lifted the lid on its latest entry-level chipset designed for its Zen 2-based Ryzen 3000 and 4000 processors, the A520 chipset. Following AMD’s mid-tier B550 chipset launch a few months back, A520 brings up the bottom of AMD’s now fully-modernized chipset stack, dialing down on things like PCIe speeds to allow its board partners to build more budget-friendly Ryzen 3000/4000 motherboards.

Overall, A520 offers up to twenty-six PCIe 3.0 lanes when paired with a Ryzen 3000 processor, with twenty of these coming directly from the CPU. And, wasting no time, numerous vendors, including ASUS, ASRock, GIGABYTE, MSI, and Biostar, have already unveiled some of their entry-level A520 boards.

A520 Chipset, The A-Team For Value

The Zen 2 architecture, the foundation of AMD’s latest Ryzen processors, has powered the company’s most popular series of processors to date. One of the marquee features of the B550 and X570 chipsets is PCIe 4.0, which, although not yet fully utilized by devices like video cards and storage, is still a popular feature that users hope will future-proof their systems for years to come. The A520 chipset, in turn, drops support for PCIe 4.0 entirely in favor of PCIe 3.0, which is easier to wire for and gives motherboard vendors extra scope to save costs compared to B550 and X570. The chipset officially represents AMD’s current-generation entry-level pathway and, as always, is based around AMD’s AM4 socket.

Overall, the limitation in PCIe speeds for A520 is board-wide. Along with the chipset itself being limited to PCIe 3.0 for both downstream and upstream connectivity, Ryzen processors will cap their on-chip PCIe lanes to PCIe 3.0 speeds when paired with an A520 motherboard. Further cutting PCIe-related costs, A520 offers less scope for lane allocation than B550 and X570, which allows vendors to reduce the number of components required. There are only a couple of avenues in which vendors can configure their boards, such as selecting between two PCIe 3.0 x1 slots or two SATA ports.

Meanwhile, A520 also supports a couple of different configuration options with the 4 high-speed storage lanes coming from the host CPU. This includes building a full-fledged PCIe 3.0 x4 M.2 slot, two PCIe 3.0 x2 M.2 slots, or a PCIe 3.0 x2 M.2 slot with two additional SATA ports.

CPU storage lanes (x4): x4 PCIe 3.0 NVMe; or x2 PCIe 3.0 NVMe + 2x SATA; or x2 PCIe 3.0 NVMe + x2 PCIe 3.0 NVMe

Chipset lanes: 4x PCIe 3.0 + 2x PCIe 3.0; or 4x PCIe 3.0 + 2x SATA

The remaining PCIe lanes from the CPU, as always, are divided between a full-length PCIe 3.0 x16 slot, and a PCIe 3.0 x4 link between the CPU and the chipset.
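
A quick tally, based on the counts in this article, shows where the twenty-six usable PCIe 3.0 lanes come from:

```python
# Rough tally of A520's PCIe 3.0 lane budget, per this article.
cpu_lanes = {
    "PCIe 3.0 x16 graphics slot": 16,
    "NVMe/SATA storage lanes":     4,
}
chipset_lanes = {
    "general-purpose PCIe 3.0":    4,
    "PCIe 3.0 x1 slots or SATA":   2,
}
total = sum(cpu_lanes.values()) + sum(chipset_lanes.values())
print(f"usable PCIe 3.0 lanes: {total}")  # 26, with 20 from the CPU

# (The CPU also spends a dedicated PCIe 3.0 x4 link on the chipset
# connection, which isn't counted in the usable total above.)
```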

As for USB connectivity, A520 offers support for up to four USB 3.2 Gen 2 ports from the CPU. Meanwhile, the chipset itself adds another USB 3.2 Gen 2 port, two USB 3.2 Gen 1 ports, and six USB 2.0 ports.

Processor Support

The A520 chipset has identical processor support to that of the mid-range B550 chipset, with official support for AMD’s current Ryzen 3000 desktop processors as well as AMD’s next-generation Ryzen 4000 processors. Like with the B550 chipset, some A520 boards come with video outputs, which all but confirms support for AMD’s Ryzen 4000 APUs – especially seeing as how A520 doesn’t officially support the earlier Ryzen 3000 or 2000 series APUs.

The AMD Ryzen 3000 CPU stack starts at the Ryzen 3 3100 with four cores and eight threads, which is currently available for $99. This would be a very suitable processor to pair with an A520 motherboard, though more ambitious builders can use any of the Ryzen 3000 desktop CPUs, including the goliath sixteen core Ryzen 9 3950X.

The Current A520 Motherboard Stack

Accompanying AMD’s announcement of the A520 chipset, motherboard vendors have begun rolling out their respective A520 boards. The main form factor of choice for A520 thus far is Micro-ATX, with a couple of Mini-ITX boards including the ASRock A520M ITX/ac and the GIGABYTE A520I AC. Gigabyte also has the only full-sized ATX model announced so far, which is the GIGABYTE A520 Aorus Elite.

A520 boards set to hit the market include some gaming-themed models, primarily from GIGABYTE with its Aorus line, as well as the ASUS TUF Gaming A520M-Plus. Biostar’s solitary offering, the A520MH V6.0, is aimed at business and casual users.

The most feature-packed A520 model looks to be the GIGABYTE A520 Aorus Elite, with four memory slots that support up to 128 GB of DDR4-4400 memory, a Realtek ALC1200 HD audio codec, and an unspecified Realtek Gigabit Ethernet controller. It has two full-length PCIe 3.0 slots that operate at x16/x2, three PCIe 3.0 x1 slots, and a single PCIe 3.0 x4 M.2 slot, plus four SATA ports.

At present, we don’t have MSRP pricing from any of the vendors, but we have reached out to each vendor individually and will update this article once we have official pricing. It is likely that the vast majority of A520 boards will come in below $150, given that B550 models start at around that price point; A520 is AMD’s most basic entry-level chipset for Ryzen 3000 processors.

Finally, today’s A520 release is a hard launch, with Newegg and other retailers already offering boards for sale. Not every board vendor has their wares on shelves just yet – as is usually the case for low-end board launches, this is a very relaxed launch – but over the coming days and weeks we should see the remainder of the A520s trickle into retail shops.

This is AMD’s secret weapon against Intel and it could change personal computing forever

AMD’s big announcement this week was the launch of the new Ryzen 3000 XT CPUs – a stopgap between the current generation and the Ryzen 4000 series expected to launch later this year.

Tucked away at the foot of the press release is mention of something that might become a fundamental part of AMD’s toolset, sooner rather than later, as it seeks to compete more sustainably with arch nemesis Intel.

TechRadar Pro covered the launch of AMD StoreMI back in April, but version 2.0 has now been announced and will include “a new caching-based acceleration algorithm that enhances data integrity and prioritizes most-used data, speeding up boot times by up to 31% and decreasing game load times by up to 13% vs an HDD only”.

The tests were carried out using a hard drive of unknown capacity (or spinning speed) with a PCIe Gen 4 NVMe SSD, again of unknown origin. In reality, you should be able to mix and match any sort of storage, regardless of whether it is magnetic or not.

No RAMDisk yet

Sadly, this version of StoreMI doesn’t (yet) include the Radeon RAMDisk, which would achieve the Holy Grail of allowing RAM and storage to mix, similar to what Intel has done with Optane.

We still believe this facility will arrive at some point, but why? Well, AMD has been working very closely with Microsoft on the brand-new Xbox Series X gaming console, and one of the biggest leaps in performance has come from moving to a new storage system that combines software (DirectStorage) with customized hardware.

It turns out that DirectStorage is something Microsoft plans to bring to Windows as well, as it can reduce the processor overhead from I/O operations from multiple cores to just a fraction of a single core. While it will not be tied to any custom hardware, AMD is likely to benefit due to the modular nature of its CPU architecture.

So where does that leave us? In a not-so-distant future, one can imagine an AMD system that pools together all the available memory/storage resources, managed intelligently in a way that’s far more efficient than what Windows 10 achieves with Virtual Memory, the part of your storage component the operating system uses as RAM.

All this is pure speculation, of course, but the fact that AMD has dedicated resources to StoreMI makes us optimistic about its future.

AMD leaks a Ryzen-powered Lenovo Chromebook

Evidence that OEMs have plans to release Ryzen-powered Chromebooks has been around for a while now, but AMD may have just straight-up confirmed it. Most recently, several listings for a Ryzen-powered 2-in-1 Pixelbook appeared on Geekbench, codenamed Zork.

Now, AMD may have confirmed that at least Lenovo is close to releasing a Ryzen-powered Chromebook. Spotted by u/Jakel856 on Reddit, AMD has listed a Ryzen 5 PRO 3500U version of the Lenovo Chromebook C330 under “Shop AMD-powered laptops”.

Originally released with a MediaTek 8173C processor, the C330 seems an unlikely candidate for the first Ryzen-powered Chromebook. The embedded CDW listing, priced at US$303.99, is actually for the MediaTek model, pointing to AMD having made a mistake.

Alternatively, we may be looking at the specifications for a new 11-inch Chromebook. Lenovo did release the Chromebook C340-11 over a year ago, so it is overdue a refresh.

AMD Launches Threadripper 3990X and Ryzen 4000 ‘Renoir’ APUs

AMD capped its whirlwind 2019 with promises of more to come: in particular, the beastly 64-core, 128-thread Threadripper 3990X, which we now know will carry suggested pricing of $3,990 and land on February 7, 2020. AMD also finally fleshed out its Ryzen Mobile 4000 series, otherwise known as “Renoir,” which marks the first 7nm x86 processors to arrive for the mobile space. These new chips come with up to eight Zen 2 cores and 16 threads paired with a revamped 7nm Vega graphics engine, marking a new level of competition for Intel in the laptop market. The first laptops come to market in Q1 2020, with more than 100 systems coming throughout the year.

AMD also announced its long-awaited Radeon RX 5600 XT, which comes bearing the RDNA architecture, 7nm process, and a PCIe 4.0 interface.

Threadripper 3990X

AMD’s launch of the Threadripper 3970X and 3960X marked the beginning of a new era of AMD dominance in the high end desktop market. In fact, these 24- and 32-core models have so much horsepower that Intel doesn’t even have a response, leaving AMD to dominate the high-end market.

But the Threadripper 3990X extends that lead even further. This new processor comes with the same 280W TDP as its counterparts but sports an unheard-of 64 cores and 128 threads. These cores operate at a 2.9 GHz base and boost up to 4.3 GHz, which is only 100 MHz lower than the targeted peak speeds of the 32-core, 64-thread Threadripper 3970X. The chip also comes with an unprecedented 288MB of combined L2 and L3 cache.
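
That 288MB figure is the chip's combined cache; assuming the standard Zen 2 layout of 512KB of L2 per core and 16MB of L3 per four-core CCX, the arithmetic checks out:

```python
# Where Threadripper 3990X's 288MB of cache comes from, assuming
# the standard Zen 2 layout: 512KB of L2 per core and 16MB of L3
# per four-core CCX.
cores = 64
l2_mb = cores * 0.5                # 64 cores x 0.5 MB = 32 MB of L2
l3_mb = (cores // 4) * 16          # 16 CCXs x 16 MB  = 256 MB of L3
print(f"L2 {l2_mb:.0f} MB + L3 {l3_mb} MB = {l2_mb + l3_mb:.0f} MB total")
```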

These new chips drop into existing sTRX40 motherboards, so there are no special platform requirements. Existing Threadripper cooling solutions and recommendations also still apply, so the 280W TDP shouldn’t be too difficult to tame.

AMD shared some performance projections, highlighting that its Ryzen 3000 series lineup scales well with the addition of more cores, giving the company the uncontested lead on the desktop. The 3990X’s hefty price tag will certainly relegate the chip to the creator and professional markets where the price of the processor is a small consideration considering the monetary gain of getting more work done, faster. Intel has no competing chips even in the range of the 3990X, so AMD is free to rake in a premium for these powerful chips.

AMD Ryzen 4000 “Renoir” APU Family

AMD splits the Ryzen 4000 family up into U-series for ultrathins, H-series for gamers and creators, and Pro-series models for the professional market. AMD’s new Ryzen 4000 series follows the company’s standard naming convention for its APUs, so while these are branded as 4000-series processors, they still come with the same Zen 2 microarchitecture and the 7nm TSMC process as the desktop 3000 series. This makes them the first 7nm x86 mobile processors. The architectural advances and 7nm process should provide AMD a solid step forward on the power efficiency front that has been a sore point for the company’s previous-gen products, and the company also unveiled its new 7nm Vega engine and SmartShift technology.

U Series

The 15W U-series lineup spans from four-core four-thread models up to eight-core 16-thread models. Yes, in a departure from its normal operating procedure, AMD has disabled SMT (threading) on some SKUs, which the company says is to accommodate custom OEM models. The U-series comes with a configurable TDP that spans from 12W to 25W.

AMD says the 7nm process will bring higher core counts into the 15W power envelope, which marks the first time eight cores and 16 threads have squeezed down into thin-and-light devices. The new chips also come with a redesigned 7nm Vega graphics engine that AMD claims boosts performance within a much more efficient power envelope. AMD says this new engine allowed it to reduce the number of Compute Units (CUs) from 11 to 8 while still maintaining what it claims is leading graphics performance compared to Intel’s Ice Lake processors. Thanks to the optimized 7nm Vega architecture, AMD says it can wring out 59% more performance per CU.
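
Taking AMD's 59% per-CU claim at face value, a rough sketch shows why fewer CUs can still mean more graphics performance:

```python
# Sanity check of AMD's Vega CU claim: 8 optimized CUs at +59%
# performance per CU, versus the previous 11-CU design.
old_cus, new_cus = 11, 8
per_cu_gain = 1.59
effective = new_cus * per_cu_gain      # ~12.7 "old CU" equivalents
print(f"8 new CUs ~= {effective:.1f} old CUs "
      f"({(effective / old_cus - 1) * 100:+.0f}% net)")
# -> despite dropping three CUs, the new engine lands ~16% ahead of
#    the old 11-CU configuration on this arithmetic alone.
```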

The Ryzen 7 4800U serves as the flagship of the U-series lineup with eight cores, sixteen threads, a 1.8 GHz base, and a 4.2 GHz boost clock speed, all within the 15W power envelope. The Vega graphics engine runs at 1750MHz.

AMD presented a range of tests comparing the Ryzen 7 4800U to Intel’s Ice Lake Core i7-1065G7, highlighting leading performance in all three key metrics: single-threaded, multi-threaded, and graphics performance. However, it’s noteworthy that the company used the CPU-intensive 3DMark Time Spy physics test for the graphics projection. The company also presented a range of testing in real-world 1080p gaming and creator workloads.

Power Benefits

AMD claims the benefits of the 7nm process combine with optimizations to the SoC power delivery to yield 20% lower power consumption and twice the performance per watt. Other improvements, like 5X faster power-state entry and exit, along with the obvious advantages of LPDDR4X memory, also provide a significant step forward for battery efficiency. AMD presented this information for the U-series products but did not share any power information about the H- or Pro-series processors.

AMD also introduced its new SmartShift technology, which allows the platform to monitor the power states of both the GPU and CPU, both of which provide enhanced telemetry via the Infinity Fabric, to modulate power delivery in real time. This allows the platform to deliver extra power to where it’s needed depending upon the workload, which AMD claims boosts both GPU and CPU performance. AMD says this new tech only works when its CPUs are paired with an AMD discrete GPU, so it won’t work with Nvidia-based systems. The company also announced a new “A+A+A” initiative that it says designates systems with AMD CPUs, GPUs, and drivers. This is partially to encourage OEMs to allow their customers to use AMD’s full driver suite.

H-series

AMD also has an H-series lineup that carries a nominal 45W TDP rating, with a configurable TDP range that extends from 35W up to a 54W peak.

The Ryzen 7 4800H comes with eight cores and 16 threads that operate at a 2.9 GHz base and a 4.2 GHz boost. This chip comes with 7 CUs that run at 1600 MHz. AMD also has the six-core, 12-thread Ryzen 5 4600H, which has a 3.0 GHz base and 4.0 GHz boost. That processor steps back to 6 CUs that operate at 1500 MHz.

This family only consists of two models, but AMD claims the flagship can take on even the desktop Core i7-9700K in both creator and gaming workloads. (AMD also used the 3DMark physics test for that gaming performance comparison.) AMD also shared its internal test data for several other workloads against the Core i7-9750H.

Athlon

AMD also announced the new Athlon Gold and Silver models that flesh out its lineup of Chromebook-optimized chips.

AMD’s High-End ‘Radeon RX’ Navi 21 GPU Rumors: Twice As Fast as Navi 10, 505mm2 Die Size, Faster GDDR6 Memory

It looks like specifications for AMD’s high-end Radeon RX Navi 21 GPU have started to leak out, with reports from the Far East claiming that the next-generation GPU could be twice as fast as Navi 10. According to folks on the Taiwan-based PTT platform, a high-end Navi GPU has been taped out and is headed for a launch in 2020.

AMD’s High-End Radeon RX Graphics Card With Navi 21 GPU Rumored To Be Twice As Fast As Navi 10 – Features 505mm2 Die Size & GDDR6 Memory

The details were posted by a forum member who lists several key specifications of the upcoming high-end Navi GPU. We know that AMD’s current Radeon RX lineup is based on the 7nm process node and is part of the first-generation RDNA family. The second-generation RDNA family is expected to make use of the more advanced 7nm+ process node, which should offer gains in overall power efficiency while allowing even denser designs.

The rumor states that AMD’s high-end Navi GPU, referred to as Navi 21, has been taped out. The chip has a die size of 505mm2, twice as big as Navi 10’s 251mm2. That is even bigger than AMD’s Vega 20 GPU, which had a die size of 331mm2, so we could be looking at a powerhouse of a chip that should definitely be faster than anything AMD has released yet. The Vega 20 GPU featured 13.2 billion transistors, so the Navi 21 GPU could exceed 15-16 billion transistors, which would put its transistor count beyond anything else AMD has shipped.
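
That transistor guess can be sanity-checked against known densities. Here is a rough sketch using Navi 10's published figures (10.3 billion transistors in 251mm2) as the reference density; the result is an estimate, not a leak:

```python
# Estimate Navi 21's transistor count from the rumored die size and
# the density of an existing 7nm AMD GPU. Navi 10's published
# figures (10.3B transistors, 251 mm^2) serve as the reference.
navi10_transistors = 10.3e9
navi10_area = 251          # mm^2
navi21_area = 505          # mm^2, per the rumor

density = navi10_transistors / navi10_area    # ~41 MTr per mm^2
estimate = density * navi21_area
print(f"~{estimate / 1e9:.1f}B transistors at Navi 10 density")
# -> ~20.7B at the same density; the 15-16B floor quoted above looks
#    conservative if the 505 mm^2 figure is accurate, and 7nm+ could
#    push density higher still.
```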

In terms of performance, the Navi 21 GPU is said to be at least twice as fast as the Navi 10 GPU. The Radeon RX 5700 XT is the fastest implementation of the Navi 10 GPU, and it comes close to the GeForce RTX 2070 SUPER, so the Navi 21 GPU could exceed the RTX 2080 SUPER’s performance and even end up coming close to the RTX 2080 Ti. This might explain why we were hearing rumors of an NVIDIA RTX 2080 Ti SUPER in the works.

With that said, clock speeds have not been stated yet, and those will play a big role in determining the final performance of the card. It should also be said that a GPU of this size would consume a lot of power; a TDP of around 275-300W is a given, since the Navi 10 GPU has a TDP of 225W.

It is also stated that, given the huge die size of the GPU itself, the card would not use HBM memory but would instead rely on GDDR6. That would mean a 384-bit or 512-bit memory interface, which should give us either 12 GB or 16 GB of memory. AMD could go one step further and offer 24 GB or even 32 GB GDDR6 products, since this would be its flagship 2020 graphics product (see the quick capacity sketch after the list below). Some of the features to expect from second-generation RDNA Navi GPUs would be:

Optimized 7nm+ process node

Enthusiast-grade desktop graphics card options

Hardware-Level Ray Tracing Support

A mix of GDDR6 and HBM2 graphics cards

More power-efficient than First-Gen Navi GPUs
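
As flagged above, those capacities follow directly from the bus widths, because each GDDR6 package connects over a 32-bit interface and ships in 1GB or 2GB densities; a quick sketch:

```python
# GDDR6 capacity options implied by the rumored bus widths: each
# GDDR6 package uses a 32-bit interface and ships in 1GB (8Gb) or
# 2GB (16Gb) densities.
for bus_width in (384, 512):
    chips = bus_width // 32
    print(f"{bus_width}-bit -> {chips} chips -> "
          f"{chips * 1} GB or {chips * 2} GB")
# 384-bit -> 12 chips -> 12 GB or 24 GB
# 512-bit -> 16 chips -> 16 GB or 32 GB
```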

Nothing else is stated, but from the looks of it, this could be a very powerful graphics card, marking AMD’s proper return to the enthusiast high-end desktop GPU space. We also know for a fact that AMD is planning to introduce hardware-accelerated ray tracing as a primary feature of its next-generation RDNA-powered GPUs. With that said, this information is based on a rumor, so treat it with a grain of salt; but if it is true, then we might get at least some form of confirmation from AMD at CES 2020, aside from the expected Radeon RX 5600 XT announcement.