EVGA announces mammoth SR-3 DARK Xeon motherboard

American hardware maker EVGA recently unveiled the exclusive (and extremely expensive) SR-3 Dark motherboard, currently available for pre-order.

The SR-3 DARK uses Intel's LGA 3647 socket and is one of just a handful of boards capable of handling the 28-core Intel Xeon W-3175X CPU. Priced at $1800, this massive E-ATX board offers a multitude of high-end features: a 24-phase digital VRM delivers clean power, six DIMM slots provide ample room for memory and can run in an ultra-fast hexa-channel configuration, and six PCIe slots (four of them running at x16 speeds) provide plenty of space for expansion cards.

If you truly have cash to burn, this board makes GTX 1080 Ti quad-SLI a possibility (Turing cards only support two-way SLI). You'll be in for some truly eye-opening benchmark scores and power bills.

The SR-3 DARK also features onboard temperature and voltage monitoring, and a triple BIOS switch for redundancy. All of this is backed by a 3-year warranty.

With that $1800 price tag, the SR-3 DARK is definitely not targeted towards a mainstream audience. Workstation users who need the W-3175X’s processing grunt and high-end enthusiasts are the SR-3 DARK’s target market.

AMD’s High-End ‘Radeon RX’ Navi 21 GPU Rumors: Twice As Fast as Navi 10, 505mm2 Die Size, Faster GDDR6 Memory

It looks like specifications of AMD's high-end Radeon RX Navi 21 GPU have started to leak out, with reports from the Far East claiming that the next-generation GPU could be twice as fast as Navi 10. According to the folks over at the Taiwan-based PTT platform, a high-end Navi GPU has been taped out and is headed for launch in 2020.

AMD’s High-End Radeon RX Graphics Card With Navi 21 GPU Rumored To Be Twice As Fast As Navi 10 – Features 505mm2 Die Size & GDDR6 Memory

The details were posted by a Chinese forum member who lists several key specifications of the upcoming high-end Navi GPU. We know that AMD's current Radeon RX lineup is based on the 7nm process node and is part of the first-generation RDNA family. The 2nd-generation RDNA family is expected to make use of the advanced 7nm+ process node, which should offer gains in overall power efficiency while allowing even denser designs.

The rumor states that AMD's high-end Navi GPU, which is being referred to as the Navi 21 GPU, has been taped out. The chip has a die size of 505mm2, twice as big as Navi 10's 251mm2. This is even bigger than AMD's Vega 20 GPU, which had a die size of 331mm2, so we could be looking at a powerhouse of a chip that should definitely be faster than anything AMD has released yet. The AMD Vega 20 GPU featured 13.2 billion transistors, so the Navi 21 GPU could exceed 15-16 billion transistors, which would give it one of the highest transistor counts of any GPU on the market.
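As a rough sanity check on that transistor-count guess, you can scale Navi 10's published density (10.3 billion transistors on 251mm2) to the rumored 505mm2 die. This is pure back-of-the-envelope speculation, and the helper below is our own illustration, not anything from the leak:

```python
# Back-of-the-envelope transistor estimate for Navi 21 by scaling
# Navi 10's published density to the rumored 505 mm^2 die.
# Navi 10: 10.3 billion transistors on 251 mm^2 (TSMC 7nm).

def scaled_transistors(ref_transistors_b, ref_area_mm2, target_area_mm2):
    """Scale a reference die's transistor count linearly by area."""
    density = ref_transistors_b / ref_area_mm2  # billions per mm^2
    return density * target_area_mm2

navi21_estimate = scaled_transistors(10.3, 251, 505)
print(f"Navi 21 at Navi 10 density: ~{navi21_estimate:.1f}B transistors")
# Same-density scaling lands around 20B, comfortably above the
# 15-16 billion floor suggested by the Vega 20 comparison.
```

A 7nm+ node would only push density higher, so this estimate is, if anything, conservative.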

In terms of performance, the Navi 21 GPU is said to be at least twice as fast as the Navi 10 GPU. The Radeon RX 5700 XT is the best showcase of the Navi 10 GPU, and it comes close to the GeForce RTX 2070 SUPER, so the Navi 21 GPU could exceed the RTX 2080 SUPER's performance and even come close to the RTX 2080 Ti. This might explain why we were hearing rumors of an NVIDIA RTX 2080 Ti SUPER in the works.

With that said, the clock speeds have not been stated yet, and those will play a big role in determining the card's final performance. It should also be said that a GPU of this size would consume a lot of power; a TDP of around 275-300W seems a given, since the Navi 10 GPU already has a TDP of 225W.

It is also stated that, given the huge die size of the GPU, the card would not use HBM memory but would rather rely on GDDR6. That would mean a 384-bit or 512-bit memory interface, which should give us either 12 GB or 16 GB of memory. AMD could go one step further and offer 24 GB or even 32 GB GDDR6 products, since this would be their flagship 2020 graphics product. Some of the features to expect from 2nd-generation RDNA Navi GPUs:

Optimized 7nm+ process node

Enthusiast-grade desktop graphics card options

Hardware-Level Ray Tracing Support

A mix of GDDR6 and HBM2 graphics cards

More power-efficient than First-Gen Navi GPUs
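Incidentally, the 12/16 GB and 24/32 GB capacity options mentioned earlier follow directly from the bus width: each GDDR6 package presents a 32-bit interface, and common densities are 8 Gb (1 GB) and 16 Gb (2 GB) per package. A quick illustrative sketch (the function name is ours):

```python
# Capacity options implied by a GDDR6 bus width: each package has a
# 32-bit interface, and common densities are 8 Gb (1 GB) and
# 16 Gb (2 GB) per package.

def gddr6_capacities_gb(bus_width_bits):
    """Return {GB-per-chip: total GB} for the two common densities."""
    chips = bus_width_bits // 32
    return {density: chips * density for density in (1, 2)}

for bus in (384, 512):
    caps = gddr6_capacities_gb(bus)
    print(f"{bus}-bit bus: {caps[1]} GB or {caps[2]} GB")
# 384-bit -> 12 GB or 24 GB; 512-bit -> 16 GB or 32 GB,
# matching the configurations named in the rumor.
```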

Nothing else is stated, but from the looks of it, this could be a very powerful graphics card, marking AMD's proper return to the enthusiast high-end desktop GPU space. We also know for a fact that AMD is planning to introduce hardware-accelerated ray tracing as a primary feature of their next-generation RDNA-powered GPUs. With that said, the information is based on a rumor, so take it with a grain of salt; but if it is true, we might hear at least some form of confirmation from AMD at CES 2020, aside from the expected Radeon RX 5600 XT announcement.

Intel Could Split 10th-Gen Comet Lake CPUs Into Two Different Sockets

In what seems to be an unprecedented move, even for Intel, Comet Lake might end up taking residence on not one, but two new sockets: Chip detective @momomo_us recently discovered a string in the latest version of CPU-Z that references the LGA1159 socket, which may complement the LGA1200 socket that we already know is coming for Comet Lake.

One of the very first leaked Comet Lake slides associated the upcoming 14nm chips with the LGA1200 socket, and recent pictures of the chips confirm the pin count. 

Now CPU-Z screenshots of Comet Lake processors show an LGA1159 socket, which is backed up by the discovery of the socket in CPU-Z’s identification strings. Pictures of purported LGA1159 Comet Lake CPUs have also emerged. 

At this point, it’s unclear if the new socket is real, but if it is, we have a few theories of what Intel could be up to.

We suspect that Intel plans to split support for Comet Lake between different platforms, with the 125W K-series chips likely requiring the LGA1200 socket while the remaining 65W and 35W Comet Lake parts will get by with the LGA1159 socket. This separation sounds reasonable as the high-end Comet Lake chips have steeper power requirements and the extra pins in the LGA1200 socket can provide the necessary juice.

If our assumption is valid, Intel could potentially position the LGA1200 socket as a "value HEDT" (high-end desktop) platform. We hope the chipmaker doesn't roll with this strategy, as it wouldn't sit well with enthusiasts, considering that rival AMD has already brought HEDT-like performance to mainstream motherboards with the 16-core Ryzen 9 3950X, which continues to leverage the advantages of the company's long-lived AM4 socket.

A couple of days ago, Iranian news outlet Tnews shared two very interesting Comet Lake slides that help support our assumption. The first slide shows three divisions for Comet Lake: Enthusiast (125W), Mainstream (65W) and Low Power (35W). The K-series, which in all likelihood comprises the Core i9-10900K, i7-10700K and i5-10600K, can be configured to 95W at lower clock speeds. However, the real nugget lies in the second slide.

The second slide clearly shows that the Comet Lake chipsets are divided into two major categories. The W480, Q470, Z490 and H470 chipsets allegedly employ the CML PCH-H chipset, while the lower-end B460 and H410 chipsets are seemingly based on the CML PCH-V chipset. It's not confirmed, but we think the 'H' stands for High-Performance while the 'V' alludes to Value.

It’s plausible that W480, Q470, Z490 and H470 motherboards come with the LGA1200 socket, while the B460 and H410 motherboards utilize the LGA1159 socket. It’s conceivable that the 65W and 35W Comet Lake chips would work fine on any Intel 400-series motherboard, but the high-end 125W parts are likely only compatible with the four high-performance models.
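To make the speculated split concrete, here is the rumored chipset-to-socket mapping expressed as a simple lookup. To be clear, this is purely our theory formalized for illustration; none of the socket assignments are confirmed by Intel:

```python
# The rumored Comet Lake platform split as a lookup table. The chipset
# names come from the leaked slide; the socket assignments are this
# article's theory and are NOT confirmed by Intel.

CHIPSET_TO_SOCKET = {
    # CML PCH-H ("High-Performance"?)
    "W480": "LGA1200", "Q470": "LGA1200", "Z490": "LGA1200", "H470": "LGA1200",
    # CML PCH-V ("Value"?)
    "B460": "LGA1159", "H410": "LGA1159",
}

def compatible_chipsets(cpu_tdp_watts):
    """125W K-series parts would need LGA1200; 65W/35W parts fit either."""
    if cpu_tdp_watts >= 125:
        return [c for c, s in CHIPSET_TO_SOCKET.items() if s == "LGA1200"]
    return list(CHIPSET_TO_SOCKET)

print(compatible_chipsets(125))  # ['W480', 'Q470', 'Z490', 'H470']
print(compatible_chipsets(65))   # all six chipsets
```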

There is little doubt that there will be two sockets for Comet Lake; we're just not sure how Intel is going to sell that idea to consumers.

Intel DG1 GPU is Tiger Lake in Discrete Form Factor, Features 96 EUs and 768 Shader Units

Information already rumored has just been confirmed by an EEC entry filed by Intel (KOMACHI via Videocardz): the DG1 graphics card will have 96 EUs. If the DG1 follows the design pattern of Tiger Lake (and there is every indication it will), then each EU will have 8 shading units, for a total of 768 shader units on DG1. This is the exact same count as TGL and matches what we know so far: DG1 is essentially TGL in a discrete form factor.

Intel’s DG1 graphics card is essentially discrete Tiger Lake graphics with 96 EUs

Before we go any further, here are the leaked codenames from the EEC filing:

DG1 External FRD1 96EU Accessory Kit (Alpha) Development Kit (DGD12KEF3A)

Discrete Graphics 96EU DG1 8+2 Windows External PROD HOST SDP (Alpha) (DGD12SEH4A)

Discrete Graphics 96EU DG1 6+2 Windows External PROD HOST SDP (Alpha) (DGD12SEH3A)

An EEC filing usually means that a prototype is on its way to being finalized, and it usually precedes arrival on shelves by around 5-6 months. This means you are likely going to see DG1 break cover around June/July 2020, with Computex 2020 or Gamescom 2020 being the likely launch venues. DG1 marks Intel's first attempt at making a discrete GPU, and the nomenclature used in this EEC filing suggests this is a development board of some kind, likely one in an external enclosure.

Now, for some fun educated speculation (readers who are here only for facts, this is your cue to jump off): while we cannot calculate the upper bounds of performance so easily, we can estimate minimum performance. Each shader unit should, at the very least, be able to achieve 2 FLOPs per clock via fused multiply-add. At a minimum clock speed of 1400 MHz, you are looking at 2.1 TFLOPs; a more liberal estimate of 1700 MHz yields 2.6 TFLOPs. Either way, you are looking at a GPU that will perform between 2.1 and 3 TFLOPs and, depending on IPC gains, will have roughly the performance level of a GTX 1650.
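The arithmetic behind those TFLOPs figures is straightforward: 96 EUs times 8 shader units each, with every unit contributing 2 FLOPs per clock via fused multiply-add. A quick sketch (the function name is ours, and the clocks are guesses, not leaked figures):

```python
# FP32 throughput floor for DG1: 96 EUs x 8 shader units per EU,
# each capable of 2 FLOPs per clock (one fused multiply-add).

def fp32_tflops(eus, shaders_per_eu, clock_mhz, flops_per_clock=2):
    return eus * shaders_per_eu * flops_per_clock * clock_mhz * 1e6 / 1e12

low = fp32_tflops(96, 8, 1400)   # conservative clock guess
high = fp32_tflops(96, 8, 1700)  # more liberal clock guess
print(f"{low:.2f} - {high:.2f} TFLOPs")  # 2.15 - 2.61 TFLOPs
```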

If priced correctly (read: throwaway pricing) this has the potential to seriously disrupt the market and simultaneously make a name for Intel as a GPU manufacturer. This, of course, depends on whether or not the company can get AIBs on board to produce this in volume and rumors would indicate that the company is having difficulty doing this. That said, considering Intel has ample financial resources to throw at the problem, I am sure AIBs would jump on board provided Intel subsidizes the adventure enough – which they might just do.

In any case, 2020 is going to be a make-or-break year for Intel, with its first 10nm processors and its first discrete GPU both slated for arrival. The company's former marketing chief, Chris Hook, slated the GPU's arrival for late 2020, which means we probably won't be seeing much of it at CES (sneak peeks excepted), if at all. And while we are on the topic of marketing, Intel has promoted our good friend (and former colleague) Ashraf Eassa to the post of marketing manager for discrete graphics.

AMD Ryzen 9 3950X review

The AMD Ryzen 9 3950X is the baddest cat in town among processors that don't land in the HEDT (high-end desktop) category. But with the specs and performance it offers, it may break apart that classification system altogether.

The Ryzen 9 3950X is built on AMD’s 7nm Zen 2 architecture, and is the highest tier in the Ryzen 3000 series. That naturally makes it the most expensive, but it also shows just how well AMD is scaling performance to multiple cores and threads. And when it comes to the Ryzen 9 3950X, there are a whopping 16 cores and 32 threads.

Price and availability 

The Ryzen 9 3950X is priced at $749 (£749, about AU$1,080), but availability since launch has been scarce. As a result, we’ve seen prices well above AMD’s suggested retail. Even at retail, that’s a sizable increase in price over most of our top picks for best processor.

The price is a 50% increase over the $499 Ryzen 9 3900X, which features 12 cores and 24 threads (and comes with its own cooler). Meanwhile, from Intel, the 18-core, 36-thread Core i9-9980XE was made to hurt a budget at its original retail price of $1,979 (about £1,520, AU$2,800). The newer Intel Core i9-10980XE brings the price down to $979 (£1,099, about AU$1,435), showing that Intel is acknowledging AMD's competition.
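For reference, the price gaps quoted above work out as follows; a trivial check using the listed US retail prices:

```python
# Quick check of the price gaps quoted above (US retail prices).
r9_3950x, r9_3900x = 749, 499
i9_9980xe, i9_10980xe = 1979, 979

amd_premium = (r9_3950x - r9_3900x) / r9_3900x * 100
intel_cut = (i9_9980xe - i9_10980xe) / i9_9980xe * 100
print(f"3950X over 3900X: +{amd_premium:.0f}%")    # ~50%
print(f"10980XE below 9980XE: -{intel_cut:.0f}%")  # ~51%
```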

Features and chipset 

The AMD Ryzen 9 3950X leaps onto the Zen 2 architecture, which sees the process node shrink from 12nm to 7nm. AMD has also packed on extra cache: 1MB of L1, 8MB of L2, and 64MB of L3. The result is an impressive CPU that packs 16 cores and 32 threads into a unit that still fits the standard AM4 socket (the very same socket AMD has been using since the first generation of Ryzen processors) and only hits a TDP of 105 watts.

Along with the new architecture, the Ryzen 9 3950X brings with it support for the new PCIe 4.0 standard on an X570 chipset. The new PCIe standard can offer a dramatic increase in bandwidth for extra fast PCIe SSDs and Navi graphics cards. But, for anyone who doesn’t need the extras just yet, the Ryzen 9 3950X can be slotted into an older motherboard. 

Performance 

From single-core to multi-threaded workloads, the Ryzen 9 3950X is a powerful performer. It also shows where extra cores scale into extra performance, and where they don't.

One thing we have to mention is that when AMD sent the Ryzen 9 3950X for review, it arrived in a pre-built tower from Maingear. Instead of testing the processor in that system, we pulled it out and installed it in our own TechRadar test system, so that all elements of the setup were controlled by us rather than by Maingear and AMD.

The Ryzen 9 3950X’s 16 cores run at a base clock speed of 3.5GHz, but they can boost up to 4.7GHz. That many cores running that fast can chew through multi-core workloads, as we saw in our benchmarks. In our Geekbench 4 test, it earned a single-core score of 5,770 and a multi-core score of 53,829. That compares to the 32-core Ryzen Threadripper 2990WX from 2018 that scored only 4,862 and 30,777 on the same tests.

Cinebench R15 and R20 further demonstrated the Ryzen 9 3950X's performance. In Cinebench R15, it scored 212 points in single-core and 4,008 in multi-core. That tops the Threadripper 2990WX in single-core but falls shy of its 5,017 multi-core score – though remember that the Threadripper has double the cores and threads.

Intel isn't sitting safe from the Ryzen 9 3950X either. Even the HEDT processors are in trouble, as the Intel Core i9-10980XE, an 18-core/36-thread monster, didn't muster better results in our benchmark suite. In Geekbench 4, Intel's chip managed 5,452 in single-core and 52,762 in multi-core testing. Both of those scores fall short of the Ryzen 9 3950X's. AMD stays ahead in Cinebench R20 as well, with a 9,171 multi-core score that tops the Core i9-10980XE's 8,730.
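Expressed as percentages, the scores quoted above give AMD a consistent lead. A small sketch using those numbers (the helper name is ours):

```python
# Relative leads implied by the scores quoted above (3950X vs 10980XE).

def lead_pct(amd_score, intel_score):
    """Percentage by which the AMD score beats the Intel score."""
    return (amd_score / intel_score - 1) * 100

results = {
    "Geekbench 4 single-core": (5770, 5452),
    "Geekbench 4 multi-core": (53829, 52762),
    "Cinebench R20 multi-core": (9171, 8730),
}
for name, (amd, intel) in results.items():
    print(f"{name}: 3950X ahead by {lead_pct(amd, intel):.1f}%")
```

The leads range from about 2% to 6%, notable for a chip costing roughly $230 less.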

It takes some serious power draw to do this, but it's improved over 2nd-Gen Threadripper. In our testing, the Ryzen 9 3950X's power draw maxed out just below 144 watts. That's high for a mainstream processor, but on the lower side compared to the HEDT processors it competes with. The perks of the 7nm process show themselves in that relationship between performance and power draw.
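That ~144W peak also lines up with AMD's package power tracking (PPT) limit: for 105W-TDP AM4 parts, the commonly cited socket power ceiling is 1.35x TDP. A quick sketch (the multiplier is AMD's published ratio, and we are assuming it applies here; it is not something we measured):

```python
# AMD's package power tracking (PPT) limit for 105W-TDP AM4 parts is
# commonly cited as 1.35x TDP, which predicts the ceiling we observed.
TDP_W = 105
PPT_RATIO = 1.35  # AMD's published socket power ratio (assumed to apply)

ppt_w = TDP_W * PPT_RATIO
print(f"Expected package power ceiling: ~{ppt_w:.0f}W")  # ~142W
```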

For all its cores, though, the Ryzen 9 3950X only proves one thing in gaming: Ryzen 3000 processors are all great. From the Ryzen 9 3950X down through the Ryzen 5 3600, we see surprisingly similar gaming performance across the Ryzen 3000 family. Running our benchmarks with an Nvidia GeForce GTX 1080 Ti, the Ryzen 9 3950X scored an average of 103fps in Total War: Warhammer II at Full HD and 43fps at 4K. The Ryzen 5 3600 managed 101fps and 42.3fps in the same tests. And Intel? The Core i9-9900KS manages 108fps at Full HD and 45fps at 4K in the same tests.

The story continues in our Middle Earth: Shadow of War benchmark. There, the Ryzen 9 3950X scored an average of 116fps at Full HD and 49fps at 4K. The Ryzen 5 3600 hit 118fps and 51fps for the same tests, shockingly beating the Ryzen 9 3950X. The Intel Core i9-9900K came out slightly ahead with 125fps at Full HD and 52fps at 4K.

The Ryzen 9 3950X clearly has high-end performance, but gamers specifically can get similar results for almost a quarter of the price. You should also be aware that with this kind of power comes some serious heat. When we were briefed on this processor, AMD straight up told us that we'd need at least a 240mm AIO cooler, which is what we used for this review. Even with our excellent NZXT Kraken X62 cooler, we still saw temperatures spike to 78°C – it kind of makes the lack of an included cooler make sense.

Final verdict 

The Ryzen 9 3950X is a performance champ, going toe-to-toe with HEDT processors that should be in a different league. It may take a bit to keep cool, but if you follow AMD’s guidance, you’ll end up with a processor that takes on gaming and multi-core workloads with aplomb. The biggest issue is that inventory is scarce. 

Save $50 on this factory overclocked Radeon RX 5700 XT graphics card

Stop going mad fiddling with settings just to get your games to run on an old graphics card. Treat yourself to a GPU upgrade this holiday season instead. Need ideas? One option is MSI's Radeon RX 5700 XT Evoke OC. After factoring in the available discounts, you can bring this card home from Newegg for $369.99 right now.

That’s $50 below the card’s list price. Part of the savings comes by way of a $30 mail-in-rebate. Sure, those can be a bit of a hassle, but it makes this an enticing deal. The other part of the bargain involves entering coupon code 93XPE4 at checkout, to knock $20 off the price.
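The deal math is simple; working back from the $50 total savings, the list price implied above is $419.99:

```python
# The deal math: list price minus the mail-in rebate and coupon.
list_price = 419.99  # implied by "$50 below list"
rebate = 30.00       # mail-in rebate
coupon = 20.00       # code 93XPE4 at checkout

final_price = list_price - rebate - coupon
print(f"${final_price:.2f}")  # $369.99
```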

Those discounts combined make this one of the least expensive Radeon RX 5700 XT cards on the market at the moment. It's a good deal because you're getting a custom model with a factory overclock: MSI set the base clock at 1,670MHz and the boost clock at 1,925MHz, up from 1,605MHz and 1,905MHz, respectively.

Granted, we’re not talking about a crazy high overclock here. But hey, it basically amounts to a free performance boost, with MSI taking all the risk. And overclocked settings aside, the Radeon RX 5700 XT is around 5-10 percent faster than Nvidia’s GeForce RTX 2060 Super, making it one of the best graphics cards for the money.

This card also comes with a couple of bonuses. Namely, it qualifies for three free months of Xbox Game Pass for PC and your choice of Borderlands 3 or Tom Clancy's Ghost Recon Breakpoint as part of AMD's "Raise the Game" bundle. So, there's some added value here.

If you don’t like the design of this card, MSI’s Radeon RX 5700 XT Mech OC model is on sale for the same price, also by way of a $30 mail-in-rebate and a $20 coupon code (VGASAV43Z). It sports the same clockspeeds and comes with the same bundled extras, just with a different custom cooler design.

NVIDIA Shows That Their GeForce RTX GPUs Are Much Faster & More Powerful Than Next-Gen Consoles

At their recent GTC China 2019 conference, NVIDIA’s CEO, Jensen Huang, showcased that their GeForce RTX GPUs are much faster than the next-generation consoles which include the Xbox Series X and the PlayStation 5.

During the presentation, NVIDIA's CEO showcased a slide that compared an older Maxwell-based gaming laptop with a GeForce GTX GPU to a new Turing-based gaming laptop featuring a GeForce RTX GPU. The point of comparing the two laptops was to showcase Max-Q technology, which has revolutionized the laptop gaming market by offering high-end discrete GPU performance at a fraction of the power budget, delivering longer battery life and a level of portability that previously was not possible on gaming notebooks.

The slide compares a Lenovo Y900 gaming laptop with a GeForce GTX 980M to a newer Lenovo Y740 gaming laptop equipped with a GeForce RTX 2080 Max-Q graphics chip. Aside from listing and comparing the dimensions and weight of the two laptops, NVIDIA stated that the GeForce RTX 2080 Max-Q powered gaming notebook is faster than the next-gen consoles expected to debut in 2020.

Earlier this month, Microsoft announced its latest gaming console, the Xbox Series X. It features much better specifications than the Xbox One X, driving up to 4K at 60 FPS (with the possibility of 120 FPS in a select number of titles) and enabling three key features: Variable Refresh Rate, 8K output, and support for hardware-accelerated ray tracing. The console will be powered by AMD's RDNA GPU and Zen 2 CPU cores. Exact specifications are currently unknown, but the Navi GPU featured in the Xbox Series X is said to be the fastest of any Navi GPU shipping today, including the $400 US Radeon RX 5700 XT, currently AMD's fastest Navi-based graphics card. That card also lacks hardware-accelerated ray tracing, a feature to be added in the architecture's second iteration, known as RDNA 2.

The GeForce RTX 2080 Max-Q is the fastest mobility GPU you can find in a gaming notebook. The non-Max-Q variant is even faster, but you don't get the portability of the Max-Q variant with that GPU due to its higher power and thermal requirements. In terms of GPU performance, if the next-generation consoles are more powerful than the Radeon RX 5700 XT, it would also mean they are more powerful than the GeForce RTX 2070, which is almost on par with the Radeon RX 5700 XT. The RTX 2070 SUPER is much faster and ends up on par with the RTX 2080 (non-SUPER), which is why I didn't use that card for the comparison.

Now, NVIDIA was quick to hit back at the console market: not only has it been a primary rival for ages, ever since NVIDIA started offering its own mainstream and high-end solutions in the notebook market, but the underlying chip architecture found in these consoles is made by NVIDIA's biggest competitor in the graphics market, AMD. NVIDIA was also the first to bring GPU-accelerated ray tracing to the mainstream and enthusiast segments with its GeForce RTX lineup, which launched back in 2018.

Since then, NVIDIA has refined the quality of ray tracing and feature sets such as DLSS in several titles, while AMD's first showing in the ray-tracing era has yet to be publicly demonstrated.

NVIDIA states that notebook gaming is one of the fastest-growing markets, expanding at a faster rate than the desktop segment. Jensen stated that there are currently 200 million active gamers around the globe running NVIDIA GeForce GPUs, and while most of them are desktop users, the notebook segment is growing every day.

It will be interesting to see AMD's ray tracing in action on the new RDNA-powered next-generation consoles, but in terms of raw GPU performance, desktop discrete graphics cards (even AMD's own enthusiast lineup arriving next year) would be much faster. With all that said, by the time the next-gen consoles launch, NVIDIA should be releasing its first 7nm GPUs, which are expected to bring a huge uplift in overall performance, including ray-tracing-specific enhancements, so stay tuned for those too.

AMD Radeon RX 5600 XT 6 GB Graphics Card 3DMark Benchmarks Leak Out – Faster Than The GeForce GTX 1660 Ti

The first performance benchmarks of AMD's Radeon RX 5600 XT 6 GB graphics card have been leaked by Videocardz. The upcoming graphics card would be part of the third series in the RX 5000 line, which is powered by AMD's Navi graphics architecture.

AMD Radeon RX 5600 XT Performance Benchmarks Leak Out – Much Faster Than The GTX 1660 Ti, Could End Up Close To The GTX 1080

The AMD Radeon RX 5600 XT has seen various specification leaks so far. The card is expected to feature a new GPU that is neither Navi 10 nor Navi 14. It could be a new SKU configured for the RX 5600 XT, with a different naming scheme than the GPUs featured on the RX 5700 and RX 5500 series cards. In terms of core count, the stream processors should fall somewhere between the RX 5500 XT's 1408 and the RX 5700's 2304 SPs. We could get 1920-2048 stream processors, but that is just speculation at this point.

In a tweet, Videocardz stated that the card would feature 6 GB of GDDR6 memory, which was apparent from the listings. That does raise the question: were the 8 GB models just an error, or will we get to see those too? Regardless, the 6 GB of GDDR6 memory would be paired with a 192-bit bus interface, and the memory clocks would be rated at 12 Gbps.

This is lower than the 14 Gbps featured on current Navi desktop offerings. It is possible that AMD is using lower-spec'd memory chips to reduce the overall pricing of the RX 5600 XT, as GDDR6 prices play a huge part in the overall cost of current-generation graphics cards. This configuration would give us 288 GB/s of bandwidth, which is still a lot more than the 224 GB/s featured on the RX 5500 XT.
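Both bandwidth figures follow from the standard formula: bandwidth equals bus width in bytes times the per-pin data rate. A quick check of the numbers above:

```python
# Memory bandwidth = (bus width in bits / 8) x data rate in Gbps.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

rx5600xt = bandwidth_gb_s(192, 12)  # rumored RX 5600 XT config
rx5500xt = bandwidth_gb_s(128, 14)  # shipping RX 5500 XT config
print(rx5600xt, rx5500xt)  # 288.0 224.0
```

So despite the slower 12 Gbps memory, the wider 192-bit bus keeps the RX 5600 XT about 29% ahead of the RX 5500 XT in raw bandwidth.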

The card that was tested seems to feature an average clock speed of around 1600 MHz and was run through various 3DMark tests. Following is the complete breakdown of the RX 5600 XT's scores:

RX 5600 XT 3DMark Time Spy – 6446 Graphics Score, 5884 Overall Score

RX 5600 XT 3DMark Time Spy Extreme – 2815 Graphics Score, 2576 Overall Score

RX 5600 XT 3DMark Time Spy Extreme – 2791 Graphics Score, 2891 Overall Score

RX 5600 XT Fire Strike – 18906 Graphics Score, 14341 Overall Score

RX 5600 XT Fire Strike Extreme – 9198 Graphics Score, 8087 Overall Score

RX 5600 XT Fire Strike Ultra – 4738 Graphics Score, 4672 Overall Score

RX 5600 XT Fire Strike Ultra – 4756 Graphics Score, 4688 Overall Score

AMD RX 5600 XT 3DMark Benchmarks (Image Credits: Videocardz)

A whole list of performance benchmarks has also been shared by TUM_APISAK which compares the card with the RX 5500 XT.

If you compare these scores to the Radeon RX 5500 XT's, you are looking at up to 30% better performance, which puts this card in the same performance tier as the GTX 1070 Ti or the Radeon RX Vega 56. What AMD could be doing with the Radeon RX 5600 XT is offering a GeForce GTX 1660 Ti competitor with performance that almost matches the GTX 1080. These are synthetic workloads, so gaming performance will certainly differ from these results, but the card still looks like it would give the GTX 1660 Ti (NVIDIA's fastest Turing-based GTX offering at the moment) a tough time.

Pricing will be crucial for the Radeon RX 5600 XT, as we have already seen Radeon RX 5500 XT 8 GB models with an MSRP of $200 US being priced up to $239 US. The Radeon RX 5600 XT could definitely fill the $279-$299 US spot as the successor to the RX 590. The card is expected to launch at CES 2020, which is a couple of weeks away, so we will be getting more information very soon.

GeIL Evo X II AMD-Edition: A 16 GB, 3600 MHz Kit Optimized For Ryzen CPUs

The GeIL EVO X II AMD-Edition kit, which is optimized for Ryzen CPUs, offers good specs: a 3600 MHz speed, RGB LEDs, and 16 GB of capacity for just $84.99. The RGB LEDs are compatible with Aura Sync, RGB Fusion, Mystic Light, and Polychrome Sync, which allow for custom colors and color effects through the vendor-specific applications.

The GeIL EVO 16 GB, 3600 MHz Memory Kit Is Tuned For Ryzen CPUs and Costs Just $84.99 US

This RAM is a good fit for Ryzen 3000 series processors because of its fast 3600 MHz speed. The timings are a bit of a downside with a CL18 rating (18-20-20-40), but for the price and features, the GeIL EVO X II is still a great memory kit to consider if you are building a Ryzen PC.
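To put the CL18 rating in perspective, first-word latency in nanoseconds works out to 2000 times the CAS latency divided by the data rate. A quick illustrative calculation (the CL16 comparison kit is hypothetical, not a GeIL product):

```python
# First-word latency in nanoseconds: 2000 x CAS latency / data rate (MT/s).
# The factor 2000 converts the DDR data rate into a clock period in ns.

def first_word_latency_ns(cas_latency, data_rate_mtps):
    return 2000 * cas_latency / data_rate_mtps

print(f"{first_word_latency_ns(18, 3600):.1f} ns")  # this kit: 10.0 ns
print(f"{first_word_latency_ns(16, 3600):.1f} ns")  # hypothetical CL16-3600 kit
```

At 10 ns, the kit is a touch slower than tighter-timed 3600 MHz kits, but well within the usual range for budget RGB memory.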

These DIMMs are 60mm tall with RGB lighting on the top and sides, which gives a nice effect since you can see the EVO X label glow with the LEDs too. The RGB lighting can be controlled through all major motherboard manufacturers' RGB control software. The parts of the module that don't light up are finished in gunmetal gray, keeping the overall design and aesthetics of the kit intact.

Some highlighted features of the GeIL EVO X II-AMD edition memory include:

Featuring the latest “Cableless RGB Illumination Design”

Addressable RGB illumination providing a stunning lighting effect

Perfectly supports ASUS AURA lighting control app

Supports Gigabyte Fusion, MSI Mystic Light, and ASRock Polychrome on selected motherboards

Specifically designed for AMD Ryzen platform providing excellent performance and compatibility

The main draw of the GeIL EVO X II kit is the performance it offers, with both XMP 2.0 support and a premium heat spreader. The XMP 2.0 profile lets the RAM reach its rated 3600 MHz speed at the slower timings of 18-20-20-40, while the premium heat spreader provides more than adequate heat transfer and dispersion, making sure your RAM stays cool and operates to its fullest capabilities.

At $84.99, this RAM is not the cheapest, but the overall design and impressive RGB effects make it a good deal. The only major downside is the timings, but those are easily overclockable, and the high-quality heat spreader can handle the added heat from an overclock.

EKWB Announces Their Quantum Series Block For Gigabyte X570 Series

EK, the European premium liquid cooling gear manufacturer, is releasing the eagerly awaited AMD AM4 socket based monoblock belonging to the Quantum Line of products, the EK-Quantum Momentum Aorus X570 Elite D-RGB.

The Latest and Greatest Generation of the Quantum Engine is Used to Cool The AM4 Chip

This monoblock is engineered specifically for the Gigabyte X570 Aorus Elite motherboard. The addressable D-RGB LEDs in the monoblock are compatible with RGB Fusion 2.0 sync control, offering full lighting customization for every single diode at any given time. This is a complete all-in-one (CPU and motherboard) liquid cooling solution for AMD Ryzen processors, compatible with the AM4-socketed Gigabyte X570 Aorus Elite.

This monoblock uses the latest generation of EK cooling engine used on the Quantum Lineup to ensure the best possible CPU cooling while not reducing flow to other components. This water block directly cools the AM4 CPU, as well as the voltage regulation (MOSFET) module. Liquid flows directly over all critical areas, providing the enthusiasts with a great solution for high and stable overclocks.

Like every EK monoblock, the EK-Quantum Momentum Aorus X570 Elite D-RGB features a high-flow design and can easily be used in systems with weaker water pumps or lower pump speed settings. The Quantum Momentum monoblock also comes with sophisticated D-RGB (addressable) lighting, which connects to a standard 3-pin 5V addressable D-RGB header.

The base of the monoblock is made of nickel-plated electrolytic copper while the top is made of quality acrylic glass material. The nickel-plated brass screw-in standoffs are pre-installed and allow for easy installation. The block is already equipped with 3-pin 5V Addressable D-RGB LED strip which is compatible with the motherboard 3-pin 5V Addressable D-RGB LED header. The arrow marking on the 3-pin LED connector is to be aligned with the +5V marking on the addressable D-RGB header.

The EK-Quantum Momentum Aorus X570 Elite D-RGB monoblock is made in Slovenia and is readily available for pre-order through the EK Webshop and Partner Reseller Network. This Quantum Line monoblock will start shipping on November 24, 2019, and is currently priced at $154.99.