Snapdragon 875 crushes Samsung’s first 5nm chip in benchmark leak

Qualcomm’s next flagship chipset, the Snapdragon 875, will likely be unveiled on December 1. Leaker Abhishek Yadav has apparently managed to get hold of the chip’s AnTuTu scores and they are quite impressive.

The chip is apparently codenamed Lahaina and, per the leaker, it scored 847,868 points in the benchmark. That makes it around 25 percent faster than the vivo iQOO 5 Pro, which is powered by the Snapdragon 865 and is the current top scorer, as noted by the folks over at GSM Arena.
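As a sanity check on the leak, the quoted 25 percent lead implies a Snapdragon 865 top score of roughly 678,000. A minimal sketch of that arithmetic (the 865 score below is back-calculated from the quoted percentage, not independently reported):

```python
def percent_faster(new_score: float, old_score: float) -> float:
    """Percentage by which new_score leads old_score."""
    return (new_score / old_score - 1) * 100

SD875_ANTUTU = 847_868                 # leaked Snapdragon 875 AnTuTu score
implied_sd865 = SD875_ANTUTU / 1.25    # score implied by the quoted ~25% gap

print(round(implied_sd865))                                 # ~678,294
print(round(percent_faster(SD875_ANTUTU, implied_sd865)))   # 25
```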

The Snapdragon 875 will reportedly be made using the 5nm process, which will give it a huge performance and power efficiency advantage over the current premium chip which is based on the 7nm fabrication technology. 

The alleged scores also leave Samsung’s first 5nm chip, the Exynos 1080, in the dust, but that doesn’t really come as a surprise, as the Exynos 1080 is a mid-range chip.

The result also suggests that Snapdragon 875-powered phones will have a huge lead over Huawei phones fueled by the in-house Kirin 9000, such as the Mate 40 Pro. This again is hardly surprising as although the Kirin 9000 is also a 5nm SoC, it features older Arm cores.

The Snapdragon 875, on the other hand, will likely have the new Cortex-X1 and Cortex-A78 CPUs. Samsung’s upcoming flagship silicon, the Exynos 2100, is also expected to employ Arm’s new design, and leaked benchmark results imply the two will be pretty similar in performance.

Both chips are expected to have one Cortex-X1 core, three Cortex-A78 cores, and four Cortex-A55 cores. 

The Snapdragon 875 is also expected to have the X60 5G modem onboard. 

AMD Ryzen 5 5600X ends Intel’s hegemony in single-thread perf; 10% faster than Core i9-10900K and 23% faster than Ryzen 9 3900X

Intel has traditionally held the single-core performance crown for many generations. AMD did get close to Intel’s 9th-gen Coffee Lake parts, but Comet Lake once again showed solid leads in single-core and gaming workloads. With Ryzen 5000 and Zen 3, Intel may no longer be in a comfortable position.

User /u/losh11 (@loshan1212) first posted CPU-Z results of the Ryzen 5 5600X on Reddit. The OP has since deleted the post, but the results were quickly shared to other subreddits. CPU-Z screenshots and product packaging of the Ryzen 5 5600X were subsequently reposted by @GawroskiT on Twitter.

In the CPU-Z single-thread benchmark, the Ryzen 5 5600X scored 643 points, leading the Core i9-10900K by 10% and the Core i7-10700K by 15%. The Zen 3 processor also appears to be 23.4% faster than the Ryzen 9 3900X in single-core, which shows that AMD is leaving no stone unturned in its quest to extract the maximum possible peak performance from the new architecture.

The Ryzen 5 5600X seems to be no slouch in multi-core either, with the processor amassing 4,814 points — 12% faster than the Core i5-10600KF and 14% faster than the Ryzen 5 3600XT.
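Working backwards from the quoted percentages gives the competitor scores these leads imply. A quick sketch (the implied scores are derived from the article’s percentages, not measured results):

```python
def implied_score(lead_score: float, pct_lead: float) -> float:
    """Competitor score implied by lead_score being pct_lead percent faster."""
    return lead_score / (1 + pct_lead / 100)

# Single-thread: Ryzen 5 5600X at 643 points
print(round(implied_score(643, 10)))    # Core i9-10900K: ~585
print(round(implied_score(643, 15)))    # Core i7-10700K: ~559
print(round(implied_score(643, 23.4)))  # Ryzen 9 3900X:  ~521

# Multi-thread: Ryzen 5 5600X at 4,814 points
print(round(implied_score(4814, 12)))   # Core i5-10600KF: ~4298
print(round(implied_score(4814, 14)))   # Ryzen 5 3600XT:  ~4223
```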

This is not the first demonstration of AMD’s newfound single-core lead, though. Previously, we saw the AMD Ryzen 9 5950X and the Ryzen 5 5600X post impressive single-core results, beating Intel’s flagship Core i9-10900K by a significant margin.

While it is always advisable to wait for final reviews before coming to conclusions, these initial scores do indicate that AMD may have a definite winner on their hands with Zen 3 that does not compromise either in single-core or multi-core workloads.

Seagate: We Are On Track with 20TB HAMR HDDs in December

Heat-assisted magnetic recording (HAMR) technology is expected to enable a rather massive increase in hard drive capacity in the coming years. It took the industry several decades to develop this technology, and its mass roll-out was delayed several times. In fact, Western Digital even decided to use energy-assisted perpendicular magnetic recording (ePMR) and microwave-assisted magnetic recording (MAMR) technologies before moving on to HAMR. Yet its rival Seagate is on track to introduce HAMR-based HDDs this year.

At its earnings conference last week, Seagate reiterated plans to ship its 20 TB HAMR HDDs this December. Initially, the drives will be available to select customers as well as inside Seagate’s Lyve modular storage systems. Seagate appears to be so confident in its HAMR technology that it also reiterated plans to introduce 50 TB hard drives in 2026.

“We remain on track to ship 20 TB HAMR drives starting in December, which is an important milestone, as we believe HAMR technology will be the industry’s path to scaling areal density and increasing drive capacities,” said Dave Mosley, CEO of Seagate. “Seagate will be the first to ship this crucial technology with a path to deliver 50 TB HAMR drives forecast in 2026.”

At present, Western Digital offers its Ultrastar DC HC650 20 TB HDD, but this drive uses energy-assisted shingled magnetic recording (SMR) technology, with all of its peculiarities when it comes to re-writing data. Therefore, Seagate’s 20 TB HAMR drives will offer numerous performance advantages compared to their direct rival, at least for customers who can get these HDDs.

Since HAMR HDDs will use new heads and new platters, it will take some time for Seagate to fully ramp up production of the new hard drives. Furthermore, the technology transition will cost money, as Seagate will have to upgrade some of its manufacturing equipment.

Nvidia RTX 3070 review: AMD’s stopwatch just started ticking a lot louder

Reviewing the RTX 3070, Nvidia’s latest $499 GPU launching Thursday, October 29, is tricky because of the timing of today’s review embargo. As of right now, the RTX 3070 is the finest GPU in this price sector by a large margin. In 24 hours, that could change—perhaps drastically.

Ahead of AMD’s big October 28 event, dedicated to its RDNA 2 GPU line, Nvidia gave us an RTX 3070 Founders Edition to test however we saw fit. This is the GPU Nvidia absolutely needed to reveal before AMD shows up in (expectedly) the same price and power range.

Inside of an Nvidia-only bubble, this new GPU is a sensation. Pretty much every major RTX 2000-series card overshot with proprietary promises instead of offering brute force worth its inflated costs. Yet without AMD nipping at its heels, Nvidia’s annoying strategy seemed to be the right call: the company established the RTX series’ exclusive bonus processing cores as a major industry option without opposition, then got to wait a full year before competing with significant power jumps and delectable price cuts.

Last month’s RTX 3080 saw that strategy bear incredible fruit—even if ordering that $699 GPU is still seemingly impossible. But what happens when Nvidia scales down the Ampere promise to a $499 product that more people can afford? And how will that compare to whatever AMD likely has to offer in the same range?

Future-proofing around the 1440p threshold

We can only answer some of those questions today. (Until Nvidia proves otherwise, we assume that availability will continue to be a massive asterisk for this and all other RTX 3000-series cards.) In good news, at least, the RTX 3070 gets off to a roaring start by rendering its 2019 sibling, the RTX 2070 Super, moot. Both debuted at $499, but the newer option typically approaches, and occasionally bests, the RTX 2080 Ti (whose $1,199 MSRP in 2018 sure feels like a kick in the ray-traced teeth nowadays).

But the RTX 3070’s price-to-performance ratio comes with one significant caveat: a not-so-future-proofed VRAM capacity of 8GB, shipping in the not-as-blistering category of GDDR6. That matches the best RTX 2000-series cards but is surpassed by the higher-speed GDDR6X VRAM in pricier RTX 3000-series GPUs.

The thing is, “future-proofed” for PC gaming is relative. What’s going to matter in 3D processing in the near future, both for the games you love and the systems you run them on? If you’re set on having the crispest native 4K rendering for the foreseeable future, the RTX 3070 doesn’t leapfrog over the 2080 Ti, particularly with a VRAM allotment that could stress any games that ship with 4K-specific texture packs.

But if you’re favoring a lower-resolution panel, perhaps 1440p or a widescreen 1440p variant—and Steam’s worldwide stats make that a safe assumption—then your version of future-proofing revolves more around processing power and ray-tracing potential. In those respects, the RTX 3070 currently looks like the tippy-top option for a “top-of-the-line” 1440p system… with the bonus of Nvidia’s Deep Learning Super-Sampling (DLSS) for surprisingly competitive fidelity in 4K resolutions, should gamers upgrade their monitor between now and the next GPU generation. (Until AMD shows us otherwise, Nvidia’s proprietary DLSS 2.0 pipeline remains the industry’s leading upscaling option, and game studios have started embracing it in droves.)

In other words, if you’re more interested in high frame rates on resolutions less than 4K, and you want GPU overkill for such a CPU-bound gaming scenario, the RTX 3070 is this year’s best breathing-room option for the price… at least, unless AMD announces an even more compelling proposition on October 28.

Strong, but not the 2080 Ti topper we expected

The above collection of game benchmarks mostly mirrors the ones I used for my RTX 3080 review, and once again, these tests err on the side of graphical overkill. You may have zero interest in using an RTX 3070 with 4K resolutions or maximum graphical slider values, and that’s understandable. Instead, these tests are designed to stress the GPU as much as possible to present the clearest comparisons between the listed cards. Look less at the FPS values and more at the relative percentages of difference. (The exception comes from “DLSS” tests, which I’ll get to.)

Even though this year’s $499 RTX 3070 clearly exceeds the power of last year’s $699 RTX 2080 Super, I tested it against last year’s $499 RTX 2070 Super as well to show exactly what a difference a year makes in terms of price-to-power proposition. The percentage differential between the 70-suffix GPUs varies based on what kind of software you’re testing, but the most massive surge in performance can be found when ray-tracing effects are toggled at pure 4K resolution. Wolfenstein Youngblood, in particular, sees the 3070 double the 2070 Super’s frame rates in its ray-tracing benchmarks.

While Nvidia has made benchmarking claims that put the RTX 3070 ahead of the RTX 2080 Ti, that doesn’t necessarily bear out in my testing—but this is because the RTX 2080 Ti Founders Edition shipped in 2018 with a remarkable capacity for safe overclocking. The 3070 FE, like its 2070 Super sibling, seriously lacks headroom for such safe overclocking for either its core or memory clocks, as managed by tests-at-every-step automation by programs such as EVGA X1. Testing was nearly identical on the 3070 with or without a scant EVGA X1 overclock applied, and as such, I’ve left its OC tests out of this roundup. Remember: as Nvidia’s Founders Editions go, generally, so do other vendors’ variants. So we’re not sure other vendors will squeeze much more out of the same model.

Thus, the 2080 Ti still pulls ahead in most, but not all, of the above gaming benchmarks, whether ray tracing is or isn’t enabled. When comparing both cards’ specs, this difference checks out, since the newer 3070 cuts back on certain components for efficiency’s sake (not to mention that dip in VRAM capacity). Categories like Tensor cores and RT cores are listed as “newer-generation” versions for the 3070: that’s 184 “third-generation” Tensor cores in the 3070, versus 544 older Tensor cores in the 2080 Ti, and 46 “second-generation” RT cores in the 3070, versus 68 older RT cores in the 2080 Ti. The bigger 3000-series cards beat the 2080 Ti in both quantity and generation, so they get the clearer wins. The 3070 finally sees that efficiency trade fail to win out in certain testing scenarios—nothing tragic, mind you, but worth noting in case you’d hoped for across-the-board wins against the 2080 Ti.

Size, ports, noise

The RTX 3070’s efficiency figures into its size reduction, down to 9.5 inches in length (242mm) from the RTX 2070 Super’s 10.5 inches (but not quite as small as the original RTX 2070’s 9-inch length). Like other 3000-series FEs, the RTX 3070 utilizes what Nvidia calls a “flow-through” design that pulls cool air from below and pushes hot air out in two directions: through its “blower,” out the same side as its DisplayPort and HDMI connections, and upward in the same direction as your motherboard’s other components. Basically, the size reduction may help you cram an RTX 3070 into a smaller case, but you’ll still want to guarantee considerable airflow.

Speaking of connections, they’re identical to what you’ll find on the RTX 3080: three for DisplayPort, one for HDMI 2.1. (If you missed it, Nvidia quietly dumped the VR-friendly USB Type-C “VirtualLink” port found in most RTX 2000-series cards from this year’s GPU generation, perhaps owing to how few VR headset manufacturers bothered supporting it.) Additionally, the 3070 continues the RTX 3000-series trend of employing a smaller 12-pin connector for power, though it ships with an adapter for today’s common 8-pin PSU standard. In the 3070’s case, it only requires one 8-pin connection to a PSU, not two (or a mix of 8-pin and 6-pin), even though it maxes out at a 220W power draw. (The 2070 Super requires one 8-pin and one 6-pin connector with a power-draw maximum of 215W.)

And when Nvidia brags that the RTX 3070 runs quieter, the company means it. While I lack solid decibel-measuring equipment to tell you exactly how much quieter this card runs than its competition, it’s safe to say that its full-load mix of fan noise and operational hum probably won’t be the loudest component in your system. And with my ear directly up to it, its noticeable noise certainly wasn’t louder than, say, a PlayStation 4 Pro. (Nvidia has described its noise level as “up to 16dBA quieter” than the original RTX 2070 Founders Edition.)

Thoughts on 1440p, ray tracing, and DLSS

The above benchmarks make clear that 4K/60fps performance in newer PC games, with all settings maxed out, isn’t a given on the RTX 3070. But it’s important to note that many of these tests include overkill settings for things like anti-aliasing, shadow resolution, and even “maximum” ray-tracing effects, all meant to guarantee maximum GPU impact for the sake of accurate comparisons between the GPUs. In the real world, you can safely drop most of these from “ultra,” “extreme,” or “insane” while still exceeding most console ports’ settings and remaining barely discernible from their over-the-top maximums, and the results often land darned close to 4K/60.

Scale down to a resolution like 1440p and you’ll hope for frame rates that take advantage of monitors rated for 144Hz and above. One good indicator of the RTX 3070’s capabilities is Borderlands 3, a particularly demanding (and arguably inefficient) game that doesn’t leverage Nvidia-specific GPU perks while packing its scenes with dynamic lighting, alpha particle effects, cel-shaded detail, and massive draw distances. When put through its benchmark wringer at 1440p on my testing rig (i7-8700K OC’ed to 4.7GHz, 32GB DDR4-3000 RAM), BL3 averages 99.5fps at the “high” settings preset or 88.0fps at “ultra.” Not 144fps, mind you, but I think of BL3 as a good “floor” for performance, easily outdone by older and more efficient 3D games.

Without ray tracing turned on in 3D games from the past few years, RTX 3070’s frame rates have easily surpassed 80fps with tons of bells and whistles enabled at 1440p resolution, and they’ve easily gone higher with every drop in settings from there. But what happens on the RTX 3070 with ray tracing turned on?

As of press time, there’s an interesting combined trend for just about everything I’ve tested with some version of DirectX Ray Tracing (DXR): the harmonious pairing of Nvidia’s latest “DLSS 2.0” standard. Should you run GPU-pounders like last year’s Control or this month’s Watch Dogs Legion at near-max settings and 1440p resolution, plus ray tracing enabled, you can expect frame rates at roughly 50 to 55fps on the RTX 3070. But a funny thing has happened with DLSS 2.0: much improved support for DLSS upscaling from 906p to 1440p. Last year, I would’ve told you that you were crazy to upscale from anything lower than 1440p, in terms of pixel smudginess.

When testing at 1440p, Control has seen its DLSS 2.0 translation of tiny details, particularly text on posters, improve compared to native rendering plus temporal anti-aliasing (TAA). Meanwhile, WDL’s benchmark is keen on adding rain to its mix, which is clever on Ubisoft’s part; this is the exact kind of detail that DLSS has struggled to render in games like Death Stranding, yet in this newer game, rain materializes almost identically when its 906p signal is upscaled with DLSS’ machine-learning wizardry.

With both of these games’ DLSS modes toggled at this 906p upscale, frame rates jump to the 78-84fps range… and that’s with ray tracing enabled (“high” RT settings in Control, “medium” RT settings in WDL).

A masterful game of GPU dominoes

Nvidia really couldn’t have set these dominoes up any better. Its RTX line of GPUs has separate components to handle the above fancy features—dedicated ray-tracing cores and dedicated “tensor” cores to handle ML-assisted computation. The way its ray-tracing cores work lines up neatly with industrywide standards like DXR, which means it’s a drop in the programming budget to implement those in ways that will work on competitors’ GPUs and on brand-new gaming consoles. And the tensor cores’ upscaling methods line up neatly with TAA, a particularly common anti-aliasing standard that Nvidia’s DLSS effectively piggybacks off.

As of DLSS 2.0, the model does not require game-specific coding to work (though developers still have to partner with Nvidia to implement it). For Nvidia gamers, then, the ray-tracing proposition going forward is clear: if you want to turn it on, you’ll almost certainly have the simultaneous option of toggling the efficiency of Nvidia’s dedicated RT cores and the efficiency of their DLSS implementation.

In terms of pixel fidelity, DLSS 2.0 has pretty much proven to be a wash, with games generally enjoying a mix of sharper and blurrier elements depending on the scene (with neither being egregious, with the notable exception of Death Stranding’s peskiest, super-detailed moments like cut-scene details and screen-filling rain). And that’s a wash visually, not computationally; the proof is in the frame-rate pudding.

We still don’t know if AMD can possibly compete when its future cards have their ray-tracing modes turned on. Maybe we’re in for a $500-ish scenario where AMD can beat Nvidia’s rendering performance in a game like Borderlands 3 at a better price-to-performance ratio, only to lose out on the same performance gains with ray tracing turned on. Having tested Watch Dogs Legion over the past week, I can safely say its RT perks—as slathered over a massive, open-world city full of reflective surfaces and other handsome light-bounce effects—are difficult to disable now that I have a midrange GPU that can reasonably handle said effects at “1440p.”

Meaning, I could turn them off… but I no longer want to. It’s hard to go back to plain ol’ rasterization after seeing so many light sources realistically emerge no matter what time of day or scenario I’m in. As I pilot a drone past a shiny office building, or drive in a shiny, future-London car past a beautiful landmark, I see objects in WDL reflect or bounce light in ways that acknowledge objects or light sources that otherwise aren’t on the screen. This is what ray tracing does: it accounts for every nearby light bounce, even if it’s not on screen, to render the whole world whether you can see it directly or not.

Plus, if you have dreams of one day toggling ray-tracing power at 4K with this card, WDL on an RTX 3070 at “high” settings gets up to a 58fps average in 4K resolution with RT at “medium,” so long as I use DLSS to upscale to 4K from… wait for it… 1440p native. Those upscaling results are convincingly crisp, as well.

Thus, as I said in the beginning, your definition of a “future-proofed” GPU will likely drive your interest in what the RTX 3070 has to offer for $499. We’re about to see even more interesting ray tracing in games—including at least one we’re not allowed to talk about yet. You’ll have to take our word for it, in terms of how exciting it is to live inside of some games’ ray-traced worlds.

If that’s not your bag, due to visual preferences or budgetary reasons, I get it. But it remains to be seen whether a cheaper RTX card can deliver the same future-proofing in the 1080p range or whether AMD will arrive with a perfect amount of budget-minded power and ray tracing—or even a butt-kicker of a card that skips ray tracing altogether in favor of powerful, traditional 3D rendering for a damned good price. For now, in the 1440p range, Nvidia has the clear lead… for at least 24 hours.

Sapphire Slides Out A Small Form Factor Pulse RX 5500 XT SF

Sapphire is no stranger to thinking outside of the box when it comes to their graphics card designs. They were one of the first to go all-in on vapor chamber cooling back in their Vapor-X days, they went wild with flowthrough designs for the Fury cards, and they really went to town on the R9 285 with an ITX version. Now they’ve taken that passion and delivered on the baby Navi 14 chip with their small form factor focused Sapphire Pulse RX 5500 XT SF.

The design of the Sapphire Pulse RX 5500 XT SF delivers the full RX 5500 XT in just 177.2mm of length. Staying within the two-slot signature of the Pulse cards means the owner doesn’t have to make concessions with most small form factor cases on the market. Taking the cooling down a notch from the full-size Pulse RX 5500 XT, the SF sports a single-fan cooler, but as cool as the Navi 14 die runs, it shouldn’t have any trouble keeping the 135W TDP in check. One thing that might have been an even better move for the SFF crowd would have been placing the 8-pin PCIe power connector on the rear of the card.

When it comes to the speeds and feeds of the Sapphire Pulse RX 5500 XT SF, we see basically the same specifications as the full-size Pulse RX 5500 XT, with the 1408 Navi-based Stream Processors running at a Base Clock of 1685MHz, a Game Clock of 1737MHz, and a Boost Clock of 1845MHz. The card is available in both 4GB and 8GB models, both with 14Gbps memory on a 128-bit bus, resulting in a memory bandwidth of 224GB/s and making 1080p gaming a breeze.
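That 224GB/s figure falls straight out of the memory speed and bus width; a quick sketch of the standard calculation:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, over 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Pulse RX 5500 XT SF: 14 Gbps GDDR6 on a 128-bit bus
print(memory_bandwidth_gb_s(14, 128))  # 224.0 GB/s
```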

I know we’re on the cusp of a new lineup from the Red Team, but it could still be some time before the more cost-conscious crowd gets some attention. While I applaud Sapphire for delivering something unique to SFF enthusiasts, I can’t help but wonder: why not go wild and do this with the RX 5600 XT?

MSI MPG B550 Gaming Edge Motherboard $20 Off at B&H

As we get closer to Black Friday, vendors are dropping deals on our favorite hardware. We’re doing our best to sift through them all, organizing them into a huge list of the best deals on tech that we update every day.

Finding a good motherboard deal is often harder than finding deals on other components. Balancing budget and performance is critical when building a desktop, and you definitely don’t want to skimp on your motherboard. Check out our list of best motherboards for 2020 to see what’s leading the motherboard market today.

Today we’re highlighting a discount on the MSI MPG B550 Gaming Edge Wi-Fi motherboard. It’s usually listed at $179 but is available today for just $159.

This motherboard from MSI is designed with an ATX form factor. It uses an AMD B550 Chipset and supports processors using an AM4 socket.

This edition can support four modules of DDR4 with speeds up to 4866MHz. You can add the ambiance of custom RGB lighting using the Mystic Light Extension. 

Visit the MSI B550 Gaming Edge Wi-Fi motherboard product page on the B&H website for checkout options on this deal. It’s currently listed without an expiration. 

Custom ASUS ROG STRIX Radeon RX 6800 XT “AMD Big Navi GPU” Reportedly Boosts Beyond 2.5 GHz With 290W TGP

More information regarding clock speeds for custom AMD Radeon RX 6800 XT “Big Navi” graphics cards has leaked out via Patrick Schur. In his latest tweet, Patrick discusses the ROG STRIX Radeon RX 6800 XT from ASUS, which reportedly comes with boost clocks beyond 2.5 GHz.

ASUS ROG STRIX Radeon RX 6800 XT With AMD’s Big Navi “Navi 21 XT” GPU Reportedly Features Boost Clocks Beyond 2.5 GHz at 290W TGP

The information is based on engineering board samples, so final clocks and TGPs may vary, but it is interesting just how far AIBs have managed to push AMD’s Big Navi GPUs, even if only internally. ASUS’s ROG STRIX Radeon RX 6800 XT is the highlighted model today, and there are three systems mentioned by Patrick that have been tested in 3DMark 11, all with varying clock speeds. The RX 6800 XT is reportedly running with a set TGP of 290W and a 1000 MHz memory clock, which corresponds to 16 Gbps memory dies.

ASUS ROG STRIX Radeon RX 6800 XT Custom Graphics Card Reported Clocks:

Average – 2291 / 2289 / 2092 MHz

Median – 2373 / 2301 / 2320 MHz

Max (Boost) – 2556 / 2394 / 2489 MHz

There are also entries spotted by Patrick at 255W TGP which are also listed below:

Average – 2298 / 1993 MHz

Median – 2357 / 2342 MHz

Max (Boost) – 2509 / 2557 MHz

The first thing mentioned by the leaker is that the drivers had some issues, which is why the clock speeds for the custom AMD Radeon RX 6800 XT graphics card are not as consistent as they should be. The large variances in average clock speeds should be resolved by the time the cards hit the market.

Also, the difference between the 255W and 290W TGPs may not look like much, but it is possible that the higher-TGP cards will deliver more overclocking headroom and more stable clock speeds. 3DMark 11 isn’t a heavy workload for 2020’s graphics cards, so more recent applications need to be evaluated before these clock speeds can be taken as representative of the final models.

Once again, the 290W and 255W figures for the AMD Radeon RX 6800 XT graphics card are TGPs that AMD bases on GPU and VRAM power alone. That is not the equivalent of NVIDIA’s TGP, which covers the entire board. Igor has also stated this, and a 290W TGP variant should effectively end up with a total board power of around 350-355 Watts.
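The distinction is worth keeping straight when comparing AMD and NVIDIA power figures. A rough sketch of the estimate, assuming roughly 60-65W of board overhead (fans, VRM losses, and other components) consistent with the reported 350-355W range:

```python
def estimated_tbp(amd_tgp_watts: float, board_overhead_watts: float) -> float:
    """AMD's TGP covers GPU + VRAM only; add estimated board overhead to approximate total board power."""
    return amd_tgp_watts + board_overhead_watts

# 290W TGP with an assumed 60-65W of overhead lands in the reported 350-355W TBP range
print(estimated_tbp(290, 60))  # 350.0
print(estimated_tbp(290, 65))  # 355.0
```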

If these reports are accurate, we should see custom variants in November, though the top variants from each AIB can be expected around the end of November or mid-December, based on the availability of Big Navi GPUs for AMD’s Radeon RX 6800 XT graphics card.

AMD Radeon RX 6800 XT With Navi 21 XT & Radeon RX 6800 With Navi 21 XL GPUs

The AMD Radeon RX 6800 series will feature two RDNA 2 based graphics cards: the Navi 21 “Big Navi” based Radeon RX 6800 XT and the Radeon RX 6800. The AMD Radeon RX 6800 XT will come packed with the Navi 21 XT GPU, a cut-down SKU featuring 72 Compute Units or 4608 SPs. The card will also feature 16 GB of GDDR6 memory across a 256-bit bus interface, 512 GB/s of total bandwidth, and clock speeds of 2015 MHz game and 2250 MHz boost at reference specs. Yesterday, it was reported that the Navi 21 XT could feature TBPs as high as 320W for reference and 355W for custom variants.

The AMD Radeon RX 6800 graphics card will feature an even more cut-down Navi 21 “Big Navi” GPU with 64 Compute Units or 4096 stream processors. The card will feature the same 16 GB GDDR6 VRAM configuration on a 256-bit bus interface, but it was reported that the card will end up with a slightly lower memory speed of 14 Gbps, which will deliver a net bandwidth of 448 GB/s. The GPU is going to feature clock speeds of 1815 MHz game and 2105 MHz boost at reference specs. It was also stated that the AMD Radeon RX 6800 with Navi 21 XL GPU could feature a TBP of 290W at stock clocks.
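The shader counts and bandwidth figures for both cards are internally consistent, given RDNA’s 64 stream processors per Compute Unit and the usual bandwidth formula. A quick verification sketch:

```python
SP_PER_CU = 64  # stream processors per RDNA Compute Unit

def stream_processors(compute_units: int) -> int:
    """Total stream processors for a given Compute Unit count."""
    return compute_units * SP_PER_CU

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# Radeon RX 6800 XT (Navi 21 XT): 72 CUs, 16 Gbps GDDR6 on a 256-bit bus
print(stream_processors(72))    # 4608
print(bandwidth_gb_s(16, 256))  # 512.0 GB/s

# Radeon RX 6800 (Navi 21 XL): 64 CUs, 14 Gbps GDDR6 on the same bus
print(stream_processors(64))    # 4096
print(bandwidth_gb_s(14, 256))  # 448.0 GB/s
```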

AMD will officially unveil its RX 6000 series graphics card family on October 28th. The second half of 2020 will definitely be an interesting time for all the hardware enthusiasts and mainstream PC gamers looking forward to upgrading their PCs with the best hardware.

AMD Ryzen 9 5950X & Ryzen 9 5900X “Zen 3” CPU Benchmarks Leak Out, 16 Cores At 5 GHz & Huge Single-Threaded Performance Jump

The latest benchmarks of AMD’s upcoming Ryzen 9 5950X and Ryzen 9 5900X “Zen 3” CPUs have leaked out in the Geekbench database (via TUM_APISAK). The Ryzen 9 series processors will feature the highest core count available in the Ryzen 5000 series lineup and will be aimed at the enthusiast gaming market.

AMD Ryzen 9 5950X 16 Core & Ryzen 9 5900X 12 Core CPU Benchmarks Leak Out, Huge Boost In Single-Core Performance

The benchmarks for the AMD Ryzen 9 5950X and Ryzen 9 5900X show both chips running at close to 5 GHz frequencies which is pretty impressive for such high core count chips. Both Ryzen 9 chips were running on different platforms. The Ryzen 9 5950X was featured on an ASUS ROG STRIX X570I Gaming motherboard with 16 GB (DDR4-3866) memory while the Ryzen 9 5900X was featured on MSI’s X570 GODLIKE motherboard with 16 GB (DDR4-3600) memory.

In Geekbench 5, the AMD Ryzen 9 5950X scored 1575 points in the single-threaded and 13,605 points in the multi-threaded test. The chip was running at clock speeds close to the 5 GHz mark, with the maximum reported clock speed hitting 4.983 GHz in the multi-threaded test, while the CPU hit 5.01 GHz in the single-core test.

The AMD Ryzen 9 5900X scored 1605 points in the single-threaded and 12,869 points in the multi-threaded test. Once again, the chip was boosting beyond its rated boost clock, with the highest core clock reported at 4.947 GHz in the multi-threaded test and 4.95 GHz in the single-threaded test.

When it comes to performance comparison, the AMD Ryzen 9 5950X is around 20% faster than the AMD Ryzen 9 3950X in the single-threaded and around 10% faster in multi-threaded tests. AMD’s Ryzen 9 5900X is also around 20-25% faster in single-core but only around 6-8% faster on average in multi-core tests compared to its predecessor, the Ryzen 9 3900X.
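As with the CPU-Z leak, the quoted percentages can be turned back into the predecessor scores they imply. A rough sketch (these figures are back-calculated from the article’s percentages, not measured Ryzen 3000 results):

```python
def implied_predecessor(new_score: float, pct_gain: float) -> float:
    """Predecessor score implied by new_score being pct_gain percent faster."""
    return new_score / (1 + pct_gain / 100)

# Ryzen 9 5950X: 1,575 ST / 13,605 MT, quoted at ~20% / ~10% over the 3950X
print(implied_predecessor(1575, 20))   # implied 3950X single-thread: ~1312.5
print(implied_predecessor(13605, 10))  # implied 3950X multi-thread:  ~12368
```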

Compared to Intel’s flagship Core i9-10900K CPU, the AMD Ryzen 9 5900X offers a 15-20% single-core performance bump and around a 15% performance bump in multi-threaded tests.

AMD Ryzen 9 5950X “Zen 3” Desktop CPU – 16 Cores / 32 Threads Up To 4.9 GHz For $799 US

Starting off with the top-of-the-line parts, AMD’s Ryzen 9 series is led by the Ryzen 9 5950X, the flagship 16-core part aimed at users demanding heavy multi-threaded performance on the AM4 socket.

The chip has a total cache of 72 MB and a TDP of 105W, and it features a boost clock of up to 4.9 GHz, which is just stunning for a 16-core part. The AMD Ryzen 9 5950X is going to cost $799 US and will be available on the 5th of November along with the rest of the lineup.

AMD Ryzen 9 5900X “Zen 3” Desktop CPU – 12 Cores / 24 Threads Up To 4.8 GHz For $549 US

Next up, we have the Ryzen 9 5900X, a 12-core part that is aimed at users demanding heavy multi-threaded performance on the AM4 socket. The chip has a total cache of 70 MB and a TDP of 105W. It features a base clock of 3.7 GHz and a boost clock of up to 4.8 GHz, which is faster than the Ryzen 9 3900XT. The AMD Ryzen 9 5900X will carry a retail price of $549 US, which is $50 US higher than the MSRP of the Ryzen 9 3900XT, while delivering significantly faster performance.

In performance comparisons, the AMD Ryzen 9 5900X completely obliterates the Intel Core i9-10900K, delivering up to a 15% single-threaded performance jump over the competing chip in Cinebench R20. The same goes for gaming performance, where the AMD Ryzen 9 5900X delivers up to 21% gains over the Core i9-10900K.

The AMD Ryzen 5000 Desktop CPUs, codenamed Vermeer, will be launching on the 5th of November. The launch lineup will include the 16 core Ryzen 9 5950X, the 12 core Ryzen 9 5900X, the 8 core Ryzen 7 5800X, and the 6 core Ryzen 5 5600X. You can learn more about these SKUs here.

All 500-series motherboards (X570/B550) will be receiving BIOS updates to support the next-gen lineup, so if you’re planning to get a Ryzen 5000 CPU at launch, head over to this link and grab the update supported for your motherboard (note that 400-series support comes later, around January 2021).

Asus Lists Specs for Non-OC RTX 3070 Cards

As we gear up for the Nvidia GeForce RTX 3070’s October 29th launch, Asus has updated its website with specs for its non-OC (non-overclocked) RTX 3070 models, including the RGB triple-fan ROG Strix Gaming unit. There are three non-OC Asus RTX 3070 GPUs overall, and despite different shrouds, aesthetics, and cooling approaches, they all share a 1755 MHz boost clock and 8GB of GDDR6 memory running at 14 Gbps.

The most visually impressive of the three cards is the Asus RTX 3070 8GB ROG Strix. This unit has three fans and RGB throughout, though that makes it fairly large. It takes up 2.9 slots and is 12.53 inches long.

If RGB isn’t your main thing, you can also opt for the Asus RTX 3070 8GB TUF, which also has three fans. The width here is 2.7 slots, and the card is a little shorter than its Strix cousin at 11.81 inches long. There is still RGB here, though it’s relegated to one corner of the card.

Finally, there’s the Asus RTX 3070 8GB Dual, which has just two fans. This card also occupies 2.7 slots, but keeping the cooling design to two fans means it comes in at just 10.51 inches long. There’s also a physical switch on this card to choose between high and low fan speeds, and one of the card’s corners also has an LED strip for decoration. The Dual also has an OC version on the way, though Asus has yet to announce its clock speed specs.

All of these cards are also compatible with Asus’ GPU Tweak II software, which is how you’ll adjust the cards’ overclocking settings. 

Keep up with RTX card info as it drops by checking our guide on where and how to buy RTX 3000 series cards.

Gigabyte Lists Four New GeForce RTX 3070 Graphics Cards

As we get closer to the RTX 3070 launch on October 29th, Gigabyte has begun to reveal its custom RTX 3070 SKUs, including the Aorus Master, Gaming, Eagle, and the new Vision series.

The RTX 3070 will be Nvidia’s first true mid-range GPU for the Ampere generation, featuring 5888 CUDA cores and 8GB of GDDR6 (non-X) on a 256-bit bus. The most intriguing feature of the 3070 is its significantly lower TDP of 220W, similar to that of the RTX 20 series cards, which should make cooling this GPU much easier. We think the RTX 3070 will be roughly 40% slower than the RTX 3080, according to the specifications. However, that’s just an educated guess. Be sure to check our review of the RTX 3070 next week for full performance results.
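As a sanity check on that guess, one can compare raw shader counts. This naive math ignores clock speeds, memory bandwidth, and how well games actually scale with core count, and it assumes the RTX 3080’s 8704 CUDA cores, a figure not stated in the article:

```python
# Naive shader-count comparison between the two Ampere cards.
# Assumption: performance scales linearly with CUDA core count,
# which real workloads rarely achieve.
cores_3070 = 5888  # from the article
cores_3080 = 8704  # assumed RTX 3080 spec

deficit = 1 - cores_3070 / cores_3080
print(f"RTX 3070 has {deficit:.0%} fewer CUDA cores than the RTX 3080")
```

Core counts alone put the gap at roughly a third, so the 40% figure above also bakes in some assumed clock and scaling differences; the upcoming review will show the real numbers.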

The Aorus Master is Gigabyte’s ‘luxury’ series of graphics cards, equipped with more bells and whistles than the Gigabyte-branded cards. The Aorus Master RTX 3070 features an almost quadruple-slot heatsink with a triple-fan cooler design. The inner fan spins in the opposite direction of the outer fans to help boost airflow. The inner fan is also recessed inwards, with the two outer fans on top overlapping the inner fan’s blades. This is another technique Gigabyte uses to improve airflow efficiency.

Aesthetically, the Aorus Master is one of the prettiest cards to date, featuring a black shroud with dark aluminum accents and dashes of RGB on the top and bottom of the card. Perhaps the most intriguing feature of the Master is the LCD display on the card’s edge. The LCD can display a multitude of things, including temperature data from the CPU and GPU, custom text, or a custom image/GIF.

We don’t know anything about the card’s internals, like power delivery, but typically Aorus-branded cards do have an upgraded power delivery system over Gigabyte’s cheaper lineups, so expect good overclocking performance out of this card.

Gigabyte’s Vision series is a new sub-brand aimed towards content creators and professionals instead of the traditional gaming markets. The RTX 3070 Vision features a beautiful white and silver finish. There’s almost nothing ‘gamery’ about this card — which is a good thing.

The RTX 3070 Vision sports a triple-fan cooler design, similar to the Windforce coolers featured in the Eagle and Gaming lineups. The cooler has four copper heat pipes running down the length of the card, connected to an aluminum heatsink. These, plus an additional heat pipe to the side of the cooler, all meet at the GPU die to cool it. The surrounding VRM and memory components are cooled by a metal baseplate.

Aesthetically, this is another gorgeous card; the front is shaded in metal and complemented by a white shroud around the sides. The backplate is all metal as well and features a cutout at the end to push air through the heatsink, which is typical of Ampere graphics cards. The card is equipped with an 8-pin and a 6-pin connector for supplementary power from the PSU.

Of course, this card is perfectly capable of gaming as well as content creation. The aesthetic should fit in gaming builds aiming for a more mature look.

The Gigabyte RTX 3070 Gaming and Eagle cards are similar to the existing RTX 3080 variants but feature a smaller form factor in terms of height. The 3070 Eagle is roughly 10mm shorter, and the Gaming roughly 5mm shorter, than the equivalent RTX 3080 SKUs.

The Gaming features the same heatsink and heat pipe design as the RTX 3080 Vision, but the Eagle takes that down a notch with five heat pipes instead of six.

Due to the shorter height, both cards look more uniform, with no jagged edges to be seen. Beyond that, the design language remains the same for both cards. The Gaming has a black shroud, complemented by silver accents at the top right and bottom left of the cooler. The Eagle has a more generic design, with a greyish-black shroud accented by a blue Eagle logo.

Both cards are equipped with an 8-pin and a 6-pin power connector, which seems to be the common configuration for the RTX 3070.