Is AMD’s Radeon RX 5500 XT Hamstrung by VRAM and PCIe Bandwidth?

With the release of the budget RX 5500 XT, AMD started to fill out the bottom two-thirds of its product stack, bringing much-needed competition to that segment. The graphics card arrived in 4GB and 8GB variants ($169 and $199, respectively), and in our RX 5500 XT review, the 4GB variant performed considerably slower in some titles – namely Forza Horizon 4, Battlefield V, Far Cry 5 and Shadow of the Tomb Raider – when running Ultra settings. The initial takeaway was that 4GB is simply not enough for those titles, which is correct. However, there’s more to it.

German website pcgameshardware.de recently tested both RX 5500 XT versions using PCIe 3.0 and PCIe 4.0. The results were a bit surprising. They found that when the 4GB card’s VRAM buffer was full, running on a PCIe 4.0 slot improved performance by reducing the cost of spilling over into system memory. The 8GB card also displayed minor improvements from PCIe 3.0 to 4.0, but the extra capacity is what makes that card shine: with more memory, the card didn’t have to swap data out of the faster VRAM as often.

So why does this happen? Once the VRAM fills up, the overflow data is sent to system RAM across the PCIe bus. The faster the bus, the faster the swapped data gets where it needs to go without delaying the next frame. The 5500 XT is wired as a PCIe 4.0 x8 card (the physical tab is x16), which yields the same theoretical bandwidth as PCIe 3.0 x16. According to the article, AMD said that is sufficient for a GPU of this performance class. For all intents and purposes, this is correct. However, for games that eclipse the installed VRAM, we see it can fall short.

What pcgameshardware.de showed was that during memory reads and writes, the transfer rate was effectively cut in half. Whereas a PCIe 4.0 x16 link would reach 12.5 GBps, x8 only reaches around 6.5-6.7 GBps — half the throughput. Memory copy speeds are not affected, as those measure the transfer rate of the VRAM itself.
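As a sanity check on the wiring claim, per-lane PCIe bandwidth follows from the transfer rate and line encoding. A rough back-of-the-envelope sketch (theoretical figures only; measured throughput, as pcgameshardware.de showed, lands well below these):

```python
# Theoretical PCIe bandwidth per direction, ignoring protocol overhead.
# Gen 3 runs at 8 GT/s per lane with 128b/130b encoding; Gen 4 doubles the rate.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s."""
    rates = {3: 8.0, 4: 16.0}          # GT/s per lane
    encoding = 128 / 130               # 128b/130b line coding
    return rates[gen] * encoding / 8 * lanes  # bits -> bytes

print(f"PCIe 3.0 x16: {pcie_bandwidth_gbps(3, 16):.2f} GB/s")  # ~15.75
print(f"PCIe 4.0 x8:  {pcie_bandwidth_gbps(4, 8):.2f} GB/s")   # ~15.75, identical
print(f"PCIe 4.0 x16: {pcie_bandwidth_gbps(4, 16):.2f} GB/s")  # ~31.51
```

This is why the x8 wiring is invisible on a PCIe 4.0 board but halves the available bus bandwidth on a PCIe 3.0 board.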

So what does this mean for performance? The improvements varied by title (and settings), but pcgameshardware.de tested Assassin’s Creed Odyssey, Battlefield V, Far Cry: New Dawn, Wolfenstein Youngblood and Shadow of the Tomb Raider. In each test, there were improvements in the 4GB card when it was running on a PCIe 4.0 system. Some were significant, others not so much. Assassin’s Creed and Far Cry appeared to benefit the most, along with Wolfenstein Youngblood. The 8GB variant also showed slight improvements across the board, but not nearly as much. 

In the end, it’s a curious choice for AMD to wire these cards in an x8 configuration, particularly with the 4GB card, which has a greater chance of running into VRAM limits. While many titles will not run into these issues, those that do can take a severe performance hit. 

With all of this information, I’m left wondering why AMD chose to configure the card in this manner. Cost, perhaps? Increasing the performance of both cards can only help the optics, especially on the 4GB card, which is the most affected. This could have been spun as a reason to buy into the AMD Ryzen 3000 X570 ecosystem, as a bright spot for PCIe 4.0. But instead, we are left wondering: why?

Intel discrete GPU struggles: won’t compete against NVIDIA or AMD

Raja Koduri could be the next one on Intel’s chopping block according to new rumors on DG1

Intel has been going through all sorts of issues over the last couple of years: its troubled modem division was sold to Apple for $1 billion, AMD keeps piling on the hurt in the consumer/HEDT/server CPU markets, and now it seems its discrete GPU is in trouble.

A fresh rumor floating online says that Intel’s upcoming DG1 is in trouble, cementing some of the whispers I’ve been hearing from my own industry sources. A new post on Reddit suggests that “DG1 is not shaping up very well at all”, and that the thermals and power are “looking to be dismal as Koduri is having a hard time reigning in DG1’s power envelope. Anyone that has followed his GPU designs know that this has been his downfall forever”.

A graphics card with a 25W TDP isn’t going to get anywhere near even the lowest-end Radeon or GeForce cards from Intel’s competitors, AMD and NVIDIA, and if DG1 is this bad this far from release, Koduri won’t look good at all. He was the man behind the failed launch of Radeon RX Vega while heading AMD’s Radeon Technologies Group, with the Reddit post continuing: “The real question now is should we expect to see Koduri still be working for Intel after Q3’20? If not, you can expect to never see DG2 make its way to a retail product”.

AMD may be spicing up the graphics card game with the Radeon RX 5500 XT

AMD may not keep us waiting much longer for the anticipated Radeon RX 5500, which the company unveiled back in October. The new graphics card, particularly the rumored Radeon RX 5500 XT, could be coming next week, according to VideoCardz.

There’s still little detail on the Radeon RX 5500 XT. We know the 5500-series graphics cards will have 22 compute units with 1408 stream processors, a bit more than half of those found on the RX 5700 XT. The new cards will use a 128-bit memory bus (half that of the 5700-series) and offer up to 8GB of GDDR6, according to an official slide from AMD, shared by Hot Hardware.
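For reference, each RDNA compute unit contains 64 stream processors, so the quoted numbers line up:

```python
# Each RDNA compute unit (CU) packs 64 stream processors (SPs).
SP_PER_CU = 64

rx5500_sps = 22 * SP_PER_CU    # 1408 SPs, matching AMD's slide
rx5700xt_sps = 40 * SP_PER_CU  # 2560 SPs on the RX 5700 XT

print(rx5500_sps, rx5700xt_sps)          # 1408 2560
print(rx5500_sps / rx5700xt_sps)         # 0.55 -> "a bit more than half"
```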

That leaves some room for guessing as to just what the RX 5500 and 5500 XT could do. The two cards would need to differentiate somehow, and if it’s not in stream processors or clock speeds, it could be in memory. VideoCardz believes the RX 5500 may come exclusively with 4GB of GDDR6 memory while the RX 5500 XT could offer 4GB and 8GB variants, similar to the way the Nvidia GeForce GTX 1060 has come in 3GB and 6GB variants.

A contest of variety

AMD is playing a hard game against Intel on the processor side and Nvidia on the graphics processor side. This has been working for AMD in the processor fight, with Ryzen CPUs stealing market share from Intel, but Nvidia’s cards still offer the most power at the high end.

AMD has still managed to make progress against Nvidia’s market share, and these new Radeon RX 5500-series cards could help it snag even more of the budget market. Team Red will be positioned to challenge Nvidia’s GeForce GTX 1650 and 1660 models, including the Ti and Super versions. Given that Nvidia effectively has six graphics processors in the low-to-mid-range space, it makes sense for AMD to introduce more than just one version of the RX 5500.

Leaked benchmarks have shown the Radeon RX 5500 giving the GTX 1650 serious competition, and a higher-spec 5500 XT could be the card to run against the GTX 1660. If the new graphics cards come out this month, we may get to see just how the competition heats up just in time for the next big shopping rush.

The Intel-AMD Showdown

It’s beginning to look like a really interesting market. After years of essentially one player taking all the spoils, we now have a significant second player with the technology to make a difference.

The incumbent is of course Intel INTC, which just apologized to customers for continued supply constraints, despite double-digit growth in supply from the first half to the second, while maintaining its raised guidance (fourth-quarter revenue of $19.2 billion and EPS of $1.28; full-year revenue of $71 billion and EPS of $4.42).

The unanticipated increase in PC demand is not the only issue, however. Intel is also trying to maintain 14nm production while ramping up 10nm. It’s building out capacity, and one might ask why this wasn’t done earlier, during all the product delays. But supply-demand parity hasn’t been achieved yet, and management hasn’t said when it will be. So that’s that.

On the other side, we have challenger Advanced Micro Devices AMD, which is announcing one breakthrough device after another, significant partnerships with companies like Alphabet’s GOOGL Google and Amazon AMZN and manufacturing alliances with leading edge foundry Taiwan Semiconductor TSM, which is making its 7nm chips.

It’s true that NVIDIA NVDA is also a player, but it doesn’t have x86 at its core and is instead focused on GPUs and the large-scale parallel processing they enable. This is making waves in HPC and the data center in general (especially after the Mellanox acquisition). But NVIDIA doesn’t play in all the x86 categories (PCs, mobile computing, data center) the way AMD does. So Intel and AMD are the main rivals.

Just for perspective, let’s see what the two companies brought in last quarter. Intel generated $19.19 billion in revenue and $1.42 in EPS (up $27 million and 2 cents a share, respectively, or more or less level with year-ago revenue and EPS in percentage terms). AMD brought in $1.80 billion in revenue and $0.14 in EPS (up $148 million (9.0%) and 4 cents a share (40%), respectively from last year).

So AMD’s revenue and per share earnings are about 9% of Intel’s, making it almost like a David-Goliath situation. Moreover, Intel has $12.025 billion on its balance sheet while AMD has a mere $1.209 billion. AMD also generates a small fraction of the cash flow that Intel does.

The difference in financial position between the two companies, despite AMD’s much faster growth, means Intel has the resources for a very aggressive price war that AMD would find harder to sustain. This could especially hurt AMD as it preps to target mid- and high-range segments (especially in mobile computing).

Intel will no doubt adopt this strategy, since its design teams have fallen behind, and semiconductor designs take years to develop, followed by qualifying the production process, ramping production and bringing products to market. Being the foundry for its own chips adds a further difficulty: Intel doesn’t draw on outside manufacturing expertise that’s already available (though it may do so if necessary).

Intel’s management team is also not what it used to be. Given these difficulties and the fact that current management has admitted to a likelihood of share losses to AMD, all the traditional Intel partners have their Plan B, which is AMD. So they have at least some AMD-powered models and systems.

But what about the valuation? At 12.29X forward twelve months’ earnings, INTC shares are trading in a tight range, below their median value over the past year. They’re also trading below the S&P 500’s 17.97X. So Intel shares are undervalued. AMD, on the other hand, is trading at 48.02X, close to the high end of its range over the past year, so it’s obviously overvalued.

That would hold unless AMD consistently beats estimates over the next year by a significant margin. So for comparison, let’s look at its surprise history. The company met earnings estimates in each of the last two quarters, beat by 21% in the quarter prior to that, and missed by 11% before that, averaging a 2.2% positive surprise over the last four quarters. That doesn’t look too significant. If AMD doesn’t substantially top expectations in the next year, the current valuation looks untenable.

That’s why we have a Zacks Rank #3 (Hold) on AMD shares and a Zacks Rank #2 (Buy) on INTC shares. You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.

AMD might unveil a new GPU with ray tracing support at CES

It’s only a matter of time before AMD rolls out a GPU architecture with hardware-based real-time ray tracing support (though not in time for any Black Friday deals) to take on Nvidia’s growing army of GeForce RTX graphics cards. But might we see something in that regard at CES in January 2020? According to the latest rumor mill chatter, the answer is yes.

Take this with a dose of skepticism, but according to Chiphell forum member Wjm47196, AMD will be previewing a second-generation Radeon DNA (RDNA) Navi GPU architecture at the annual convention in Las Vegas, as spotted by WCCFTech.

I can see your eyes rolling into the back of your head, but there are a couple of things that make this rumor at least plausible. One is Wjm47196’s track record: the same user posted accurate information about AMD’s first-generation Navi architecture before it launched, and the same goes for some other GPU launches.

Secondly, we know AMD will implement ray tracing support in a future Navi GPU architecture, because both Sony’s PlayStation 5 and Microsoft’s next Xbox (Project Scarlett) will feature hardware ray tracing, based on custom AMD silicon.

A CES announcement also doesn’t mean the GPU would launch soon after. Over the past several years, AMD has announced products and disclosed roadmaps in December and January for products that eventually launched in June or July. So an initial preview in January, six months before availability, would be perfectly in line with AMD’s previous GPUs.

Assuming nothing in AMD’s roadmap has changed, the next high-end version of Navi (Navi 20) will be built on an enhanced 7nm manufacturing process (7nm+). As these things go, we can assume it will have faster clocks and better power efficiency, probably more GPU cores, and of course ray tracing.

Beyond that, there is not much in the way of official information. There are also plenty of questions to go around, though. For example, will Navi 20 be competitive with Nvidia’s top cards, or slot in a peg or two below whatever Nvidia’s fastest cards will be at the time? How much will they cost? And will Intel have a discrete GPU for gaming when Navi 20 actually launches?

Strap in, 2020 could be a wild ride.

AMD Ryzen 9 APUs could present a new challenge for Intel

AMD has been doing everything right and is looking to parlay its recent victories into sustained success for the long haul. According to hardware leaker @Komachi, the chipmaker could do just that through the use of integrated graphics.

As TechRadar highlights, a list of processors recently published by the hardware leaker highlights four Ryzen 9 parts – two 45W units and two 15W “Pro” models – sporting a B12 designation. This, according to the leaker, indicates the presence of 12 graphical compute units.

Ryzen 9 APUs, as the publication points out, could benefit from heightened clock speeds and increased efficiency afforded by their 7-nanometer design. This, in turn, would presumably benefit graphics compute units as well. It’d be an interesting alternative to discrete graphics solutions and would of course further press Intel in the portable laptop space.

AMD has had a phenomenal 2019, recently reporting its highest quarterly revenue since 2015. The company still has a lot of ground to make up but has been steering the ship in the right direction with increased market share ever since Q2 2017.

As always, it’s worth reiterating that the post from @Komachi neither confirms nor denies anything; nothing is truly official until word comes from the horse’s mouth. With CES 2020 less than two months away, however, the timing is certainly right for a hardware announcement.

Intel Claims Xeon Cascade Lake-AP 56 Core CPU Up To 84% Faster Than AMD’s 64 Core EPYC Rome 7742 in Real-World HPC Benchmarks

Intel has posted a series of new benchmarks claiming that its Xeon-class Cascade Lake-AP CPUs run much faster than AMD’s 2nd Generation EPYC Rome CPUs. The benchmarks, which Intel claims are representative of ‘real-world’ performance in the HPC segment, compare Intel’s 2S (dual-socket) Xeon Platinum 9282 against AMD’s EPYC 7742 (also in a dual-socket configuration).

The Cascade Lake-AP Xeon Platinum 9282 56 Core CPU Claims To Be 84% Faster Than AMD’s EPYC 7742 64 Core CPU in Intel’s ‘Real-World’ HPC – AI Performance Benchmark Suite

The performance metrics for both processors were posted on Medium, where Intel also recently published an article about core scaling and the reliance of modern applications on the number of cores available. According to Intel, 8 cores with sustained frequencies would scale better than, say, a 12-core or 16-core chip. Intel may have provided a lot of data backing up its findings, but this is the same Intel that was reluctant to move beyond 4 cores back in 2017, when AMD was offering Ryzen chips with up to 8 cores and 16 threads. It’s interesting how 8-core processors have suddenly become the next big thing for Intel’s mainstream lineup, and the same would happen with the 10th Gen family, which is expected to receive even more cores.

For the HPC market, Intel says that more processor cores add compute, but overall system or workload performance depends on other factors, including:

· The performance of each core
· Software optimizations leveraging specific instructions
· Memory bandwidth to ensure feeding of the cores
· Cluster-level scaling deployed

Anyway, coming back to the topic, Intel’s latest benchmarks compare the Xeon Platinum 9282 against the EPYC 7742. The Xeon Platinum is one of the elusive Cascade Lake-AP processors, which feature two dies instead of a single monolithic one, stacking up to 56 cores and 112 threads. The chip has a base clock of 2.60 GHz and a boost clock of 3.80 GHz, along with 77 MB of cache and a TDP of 400W. The Intel Cascade Lake-AP chips feature 12 memory channels, compared to AMD’s 8 memory channels per chip.

The AMD EPYC 7742 is built on a 7nm process node (vs. Intel’s 14nm+++) and features 64 cores / 128 threads. The chip has a base clock of 2.25 GHz and a boost clock of 3.4 GHz, with 256 MB of L3 cache, 128 PCIe Gen 4 lanes and a TDP of 225W. Pricing plays a huge role too: the EPYC 7742 costs $6950 US, while the Xeon Platinum 9282 is suggested to cost between $25K and $50K.
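The channel-count difference above translates into a theoretical memory bandwidth gap. A quick sketch, assuming the officially supported DIMM speeds (DDR4-2933 for Cascade Lake-AP, DDR4-3200 for Rome; sustained bandwidth in practice is lower):

```python
# Peak theoretical DDR4 bandwidth per socket: channels * MT/s * 8 bytes.
# Speeds assumed: DDR4-2933 (Cascade Lake-AP), DDR4-3200 (EPYC Rome).

def ddr4_peak_gbps(channels: int, mts: int) -> float:
    """Theoretical peak bandwidth in GB/s; each channel has a 64-bit (8-byte) bus."""
    return channels * mts * 8 / 1000

xeon = ddr4_peak_gbps(12, 2933)   # ~281.6 GB/s
epyc = ddr4_peak_gbps(8, 3200)    # ~204.8 GB/s
print(f"Xeon Platinum 9282: {xeon:.1f} GB/s, EPYC 7742: {epyc:.1f} GB/s")
```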

So right off the bat, we can note that this isn’t a fair comparison: not only does Intel’s chip have a higher TDP, but its cost is at least 3.5x higher than the AMD processor’s. Yes, the EPYC 7742 is AMD’s flagship 2nd Generation Rome processor for servers, but even so, this isn’t an apples-to-apples comparison in any possible way.

Update: ServeTheHome’s Patrick J Kennedy has found that the GROMACS version Intel used is an outdated one that doesn’t utilize the 256-bit wide AVX2 SIMD instructions featured on Zen 2. Intel used GROMACS 2019.3 in what it terms real-world benchmarks; however, the latest version available is 2019.4, which adds support for Zen 2 based EPYC Rome chips like the EPYC 7742 that Intel tested its Xeon Platinum 9282 against. It just goes to show that even Intel’s ‘Real-World’ benchmarks aren’t indicative of actual product performance and may lead to misleading statements about competitor products. And this isn’t the first time Intel has used misleading benchmarks or statements to downplay the competition: it has dismissed several important performance metrics used by tech reviewers as invalid and not indicative of actual product performance, while insisting its own metrics surely are.

The benchmarks show the Xeon Platinum 9282 delivering an average performance increase of 31%, going as high as 84%. Several HPC-specific applications are shown, which Intel claims are representative of real-world performance in the server market. Dissecting each application reveals the breakdown of performance in each individual workload; in the case of the Manufacturing application (ANSYS Fluent workloads), Intel has a 13% average performance uplift over AMD’s EPYC Rome chip. Intel also claims that having AVX-512 onboard the new Xeon chips gives them an edge in several applications, such as VASP, NAMD, GROMACS, FSI & LAMMPS.

The HPC segment is broad with varying compute requirements by workload. 56 core Xeon Platinum 9282 ranges from 8% to 84% better performance (31% higher geomean) than AMD’s 64 core Rome-based system (7742) on leading real-world HPC workloads across manufacturing, life sciences, financial services and earth sciences(2).

Some of the applications and results shown above are a geomean of several specific workloads, all with different characteristics and sensitivities. Drilling into the details of these workloads provides further insight into performance. For example, Xeon Platinum 9282 leads AMD Rome 7742 by 13% on a geomean of 14 ANSYS Fluent workloads. Across those 14 different CFD simulations, Xeon’s results range from 2% lower to 36% higher.

Medium (Intel)
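The “geomean” Intel quotes is the geometric mean of per-workload speedup ratios, which dampens the influence of outliers like the 84% case and treats a 2x gain and a 2x loss symmetrically. A minimal sketch with hypothetical ratios (not Intel’s actual per-workload data):

```python
import math

def geomean(ratios):
    """Geometric mean of per-workload speedup ratios."""
    return math.prod(ratios) ** (1 / len(ratios))

# Hypothetical speedup ratios (Xeon / EPYC) for illustration only.
speedups = [1.08, 1.84, 1.13, 1.31, 1.02]
print(f"geomean speedup: {geomean(speedups):.3f}")
```

Note how a 2x win paired with a 2x loss yields a geomean of exactly 1.0, whereas an arithmetic mean would report a misleading 25% average gain.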

Intel further goes on to claim that Xeon Platinum 9200 series processors offer a lower TCO (total cost of ownership). Since the performance of the Xeon Platinum 9200 series is higher, you’d require fewer nodes, which should drive down node acquisition cost and lower fabric, switching and cabling costs. It is also mentioned that while the Xeon-AP has a higher TDP and power requirement than AMD’s EPYC Rome (400W vs 225W), this should be offset by the lower number of nodes required to reach the same performance.

Aside from raw compute power, memory bandwidth is also highlighted as a main performance factor. Surprisingly, major industry players are already evaluating replacing their existing Intel-based systems with EPYC processors. Just a day before Intel published its report, it was revealed that Netflix may soon be switching to an AMD EPYC based platform, as the TCO is similar but the EPYC solution may actually offer higher bandwidth than an Intel Xeon based system.

There will be a lot more action next year in the server department, as AMD launches its energy-efficient 7nm+ EPYC Milan CPUs to tackle both Intel’s 14nm Cooper Lake and 10nm Ice Lake lineups simultaneously.

AMD’s comments on CPU design reflect it is in a very different place than Intel right now

While Intel is making investments to recapture process leadership, AMD says it’s more focused on architecture.

AMD just wrapped up a quarter in which it generated the highest revenue since 2005, and that in and of itself is impressive. The real story is not in the sales numbers, though. What I find more interesting than anything else, as it pertains to AMD’s current situation, is how it finds itself operating in a much different mindset than rival Intel, which for years offered up the most advanced processor technology on the market.

That is no longer the case. Yes, the newly released Core i9-9900KS is the best CPU for gaming, and before that, the regular Core i9-9900K held the distinction. But underneath the hood, it is yet another iteration of Intel’s 14-nanometer lithography (14nm++, if you’re keeping count), which dates all the way back to Intel’s 5th generation Broadwell parts released in 2014. And if you look beyond gaming, AMD is swinging more cores in the mainstream market, to hammer multi-threaded workloads. The gap will widen further when AMD’s 16-core/32-thread Ryzen 9 3950X arrives next month.

AMD’s resurgence began with the introduction of its Zen architecture. However, now in its third generation, Zen 2 (which follows Zen and Zen+) is the first to employ a 7nm process node.

Comparing process nodes from one company to another is not exactly an apples-to-apples affair, but 7nm is certainly cutting edge, and it has arrived in earnest. Meanwhile, Intel only recently (and finally) began volume shipments of its 10nm Ice Lake processors, though so far they are only available in laptops.

Process node is not everything, but it’s important. It’s also where AMD and Intel find themselves at very different places, not just in what’s currently shipping, but in where each company’s focus is, at the moment.

I bring this up because of comments made by AMD CEO Dr. Lisa Su during an earnings call with investors. She was asked to break down how much of AMD’s current advantage is really related to process technology versus the underlying architecture.

“The way I would answer that question is, we’ve made a set of choices, and the set of choices include process technology, they include architecture, our chiplet architecture, they include sort of our overall system architecture. And I think we’ve made a set of good choices,” Dr. Su said (a full transcript is available at SeekingAlpha).

“Going forward, we are not relying on process technology as the main driver. We think process technology is necessary. It’s necessary to be sort of at the leading edge of process technology. And so, today, 7-nanometer is a great node, and we’re getting a lot of benefit from it. We will transition to the 5-nanometer node at the appropriate time and get great benefit from that as well. But we’re doing a lot in architecture. And I would say, that the architecture is where we believe the highest leverage is for our product portfolio going forward.”

In a sense, Dr. Su is almost downplaying AMD’s process technology lead. It’s a luxury that would have been difficult to fathom before Zen arrived, and in particular Zen 2. But here we are, with AMD shipping 7nm CPUs and talking about architecture as its focus going forward, rather than relying on process technology as its main driver.

Contrast that sentiment with Intel, which recently said it’s “investing to recapture process leadership going forward.” I’m sure Intel would say architecture is important too, but unlike AMD, getting back to a consistent cadence with its process nodes is what Intel is focused on.

“We’re ramping a multitude of products. We have increased confidence in 5-nanometer. And as we mentioned for 7 and 5 getting back to a two-and-a-half, two year cadence is what we’re focused on and we’re confident in the future,” Intel CEO Bob Swan said last week.

I fully expect Intel to rebound on the technology side (it already leads in sales, revenue, and market share). Getting to 10nm has been tough, largely because Intel’s goals were too aggressive at the outset. But I think things will go smoother in the next few years. We’ll see.

Regardless of how it plays out, AMD is in a good spot. Perhaps just as importantly, AMD is exuding the kind of genuine confidence that wasn’t really there in the past. AMD had that swagger in the early Athlon 64 / X2 days, and now it’s back, only this time AMD seems in a better position, having divested itself of the manufacturing side of the business. In retrospect, that move is proving to have been extremely beneficial.

AMD 2nd Gen EPYC Rome Processors Feature A Gargantuan 39.54 Billion Transistors, IO Die Pictured in Detail

The AMD 2nd Generation EPYC Rome processors launched in August, and since then we have been getting more details about the chip and its features. The latest details on the I/O die, including close-up die shots, have been revealed by Hardwareluxx, giving us a better look at AMD’s most innovative server chip to date.

AMD 2nd Gen EPYC Rome Processor IOD Detailed – 8.34 Billion Transistors on a Single Die, 39.54 Billion on The Entire Chip

There have been many details that AMD has only recently started revealing about its 2nd Gen EPYC Rome processors. The EPYC Rome processors use a nine-die design, also referred to as an MCM (multi-chip module). The nine dies comprise eight CCDs (compute core dies) and a single IOD (input/output die). Each CCD is composed of two CCXs (compute core complexes), each featuring four Zen 2 cores with their own respective L2 cache and a shared L3 cache. All eight CCDs are connected to the I/O die using Infinity Fabric.

Each CCD measures 74mm2 and is composed of 3.9 billion transistors. The IOD featured on Ryzen has a die size of 125mm2 and is composed of 2.09 billion transistors, while the IOD featured on EPYC is composed of 8.34 billion transistors and measures 416mm2, making it the biggest die on the 2nd Gen EPYC chip. The IOD combined with the eight Zen 2 CCDs measures 1008mm2, for a whopping 39.54 billion transistors in total.
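Those figures add up exactly: eight CCDs plus the IOD account for both the quoted package area and the transistor total:

```python
# Tally die area and transistor counts for a fully populated Rome package.
CCD_AREA_MM2, CCD_TRANSISTORS = 74, 3.9e9        # per compute die
IOD_AREA_MM2, IOD_TRANSISTORS = 416, 8.34e9      # server I/O die

total_area = 8 * CCD_AREA_MM2 + IOD_AREA_MM2               # -> 1008 mm^2
total_transistors = 8 * CCD_TRANSISTORS + IOD_TRANSISTORS  # -> 39.54 billion
print(f"{total_area} mm^2, {total_transistors / 1e9:.2f} billion transistors")
```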

The IOD featured on EPYC is much bigger than the one featured on the Ryzen processors, owing to the more demanding feature set of server platforms. When exposed through transillumination, the IO die’s internal blocks can be seen much more clearly.

It is stated that much of the central space on the IOD is dedicated to SRAM and the crossbar switch, while the PCIe Gen 4 interfaces sit on the sides of the IOD. The upper and lower areas of the die each hold four of the 72-bit DDR4 memory channels. Here’s the interesting part: 2nd Gen EPYC can scale up to 162 PCIe lanes by offering twice the bandwidth through PCIe Gen 4 and reducing Infinity Fabric’s reliance on the bus, exposing more lanes on custom processors planned for launch later. The extra PCIe lanes are already there, but they are currently used by the interconnect.

In addition to the IOD, the Zen 2 CCDs (compute core dies) have also received their own die shots from OC_Burner on Flickr. For those who haven’t seen them before, this is what each core die of your Ryzen processor looks like. The same die is used across Zen 2 based Ryzen, EPYC and the upcoming Ryzen Threadripper processors:

Also, the IOD of the EPYC processor is compared to the IOD on Ryzen processors, revealing just how big the server-aimed IOD is. AMD has already briefed us on how it will configure various 2nd Gen EPYC Rome processors for different core counts. While all EPYC Rome chips have a total of eight CCDs, not all of them are enabled on every processor.

Some CCDs even have to be partially disabled, as on the 16-core model. That model has only four CCDs enabled, and each of those is itself partially enabled with only four cores, meaning just two cores active per CCX. Other SKUs have three cores enabled per CCX, while on variants such as the 48-core model all eight CCDs are enabled, but some CCDs pair a CCX with all four cores enabled against a CCX with just three.
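As an illustrative sketch (a simplification; as noted above, real SKUs can mix uneven CCX configurations), a Rome SKU’s core count follows from how many CCDs are enabled and how many cores are active per CCX:

```python
# Sketch of how Rome SKUs derive core counts from partially enabled dies.
# Each package has up to 8 CCDs; each CCD holds 2 CCXs of up to 4 cores.

def total_cores(cores_per_ccx: int, ccds_enabled: int = 8) -> int:
    """Total active cores for a uniform enable pattern."""
    return ccds_enabled * 2 * cores_per_ccx

print(total_cores(4))                   # 64 cores: fully enabled (EPYC 7742)
print(total_cores(2, ccds_enabled=4))   # 16 cores: 4 CCDs, 2 cores per CCX
print(total_cores(3))                   # 48 cores: all CCDs, 3 cores per CCX
```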

But this is just the start. AMD is expected to use more advanced packaging and chiplet designs in its next-generation server processors, codenamed EPYC Milan and EPYC Genoa, which will use the Zen 3 and Zen 4 core architectures, respectively. If everything runs smoothly for AMD’s long-term Zen roadmap in the years to come, we could see the company dominating all sectors of the CPU market again. EPYC Rome has already secured major deals with Amazon (AWS), 7nm Rome processors will power the Atos BullSequana XH2000 supercomputer, and a future-generation EPYC line will power the Frontier supercomputer being built by the U.S. Department of Energy, aiming for deployment in 2021.

AMD Ryzen 7 3750X with 105 W TDP: Spotted at AMD

AMD has accidentally published its Product Master list, a document listing all of its consumer, enterprise, and OEM parts. The latest publication reveals multiple CPUs that could potentially be released in the future. The document, dated September 2019, indicates that AMD may be mulling a Ryzen 7 3750X with a 105 W TDP, though it does not disclose the chip’s specifications or whether it would be OEM-only.

As the product number suggests, AMD’s Ryzen 7 3750X would sit between the eight-core Ryzen 7 3800X and the eight-core Ryzen 7 3700X, which suggests we are dealing with another eight-core CPU with SMT. Meanwhile, the base frequency difference between the Ryzen 7 3700X and 3800X is only 300 MHz, which is pretty narrow, whereas the former features a mainstream TDP of 65 W while the latter goes all the way to 105 W.

Keeping in mind that AMD already has a rather diverse Ryzen 3000 (aka Matisse) family, and that the company may have various ideas about how to improve its product lineup, it is not particularly easy to make predictions about future products, especially ones that may end up OEM-only, inside desktops from only one or two partners. The only thing certain at this point is that AMD might strengthen its desktop family if it feels the need.

AMD’s Product Master document seems to be aimed at the company’s sales teams and contains OPN codes along with US ECCN, HTS, and CCATS codes that are required by the US export regulators. While the list mentions numerous products, at least some of them are potential or partner-specific products.