ADDLINK S90 1TB PCIE 4.0 NVME SSD REVIEW

It may not look like much, but the Addlink S90 is a tiny revolutionary: a small drive trying to upset the status quo. Conventional wisdom says quality storage only comes from a few big players, such as Samsung, Western Digital, Crucial, and Intel. Addlink doesn’t have much respect for the old ways though, so following the success of its S70 (which makes a prominent appearance in our best NVMe SSD guide), the S90 moves things up a level by offering speedy PCIe 4.0 throughput. Which isn’t bad for a company predominantly seen as a budget player.

If you’ve followed the shift to PCIe 4.0 at all, you’ll have spotted that every drive out there currently uses the Phison E16 controller, whether that’s the Corsair Force MP600, Gigabyte Aorus SSD, or Seagate FireCuda 520. And sure enough that’s what you’ll find underneath the sticker here, alongside the equally ubiquitous Toshiba 3D TLC NAND flash memory. They’re not all exactly the same, but they’re close enough to be almost indistinguishable.

It’s worth noting at this point that you’ll need a Ryzen 3000 processor and an accompanying X570 or B550 motherboard to get the most from this drive. If you’ve sided with Intel, the drive will still work, but you’ll be stuck with the limits of PCIe 3.0 (3.4GB/s). And while you can future-proof yourself a little by buying a drive like this, next-gen drives do come with a premium attached, so weigh that one up carefully, especially as we know Samsung has a PCIe 4.0 drive on the way.

Unlike its budget-friendly predecessor, the Addlink S90 is only available in 1TB or 2TB trim. There’s no 256GB or 512GB model here designed to be paired with a beefy hard drive or SATA SSD. It also means that while both drives do offer great value for money, they still represent a high initial outlay. You’re looking at $196 (£199) for the 1TB model, or $399 (£398) for the 2TB drive. That’s still only 20 cents a gigabyte, but there are a lot of gigabytes involved.
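For anyone weighing the two capacities, the cost-per-gigabyte math is easy to sanity-check. A quick Python sketch using the US prices above (the helper function is our own illustration):

```python
# Quick sanity check on the cost-per-gigabyte figures quoted above.
def cents_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Return the cost in US cents per gigabyte."""
    return price_usd * 100 / capacity_gb

print(f"1TB: {cents_per_gb(196, 1000):.1f} cents/GB")  # 1TB: 19.6 cents/GB
print(f"2TB: {cents_per_gb(399, 2000):.1f} cents/GB")  # roughly 20 cents/GB
```

Both capacities land at essentially the same rate, which is why the total outlay, not the per-gigabyte price, is the sticking point.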

This is a budget SSD, make no mistake. This is evident from the packaging and the fact that there is nothing else to sweeten the deal. No software, no utilities to monitor drive performance, and nothing in the way of cooling (although Addlink does offer another drive, the H90, which is essentially the same as the S90, but ships with a heatsink).

NVMe SSDs are known to get hot, and any motherboard worth its salt will have some form of heatsink present. The Gigabyte X570 Aorus Master I used for testing has heatsinks on every M.2 slot, and the S90 sat under one throughout. Even so, the S90 peaked at 69 C during testing, which is high, and right at the limit of the suggested operating temperature of 70 C. I didn’t see any throttling though, even when really pushing it.

In terms of performance there’s a lot to like about the Addlink S90, as there is about any PCIe 4.0 NVMe SSD. In the synthetic benchmarks it gives even the best PCIe 3.0 SSDs a sound thrashing, with sequential reads almost doubling. Write performance is also about 1GB/s better. Real-world performance can’t quite live up to these figures, but the Addlink S90 still manages to shave a few seconds off.

There’s not a whole lot between any of the PCIe 4.0 SSDs we’ve tested of late, although the Sabrent Rocket NVMe 4.0 did produce slightly better numbers pretty much across the board. That’s not enough to recommend that drive over this one on performance alone, but the real differences between these drives lie elsewhere.

The problem for Addlink is that it’s thrown everything at the budget angle of the S90, and when you take that away, you’re left with a drive that doesn’t have too much going for it. In essence it’s a decent drive. Good performance, strong value for money, and industry standard components. 

The problem is, the market has responded to the S90’s release by dropping prices accordingly. You can now pick up the likes of the Corsair Force MP600 for $200 (£190), which performs pretty much identically but comes with a funky heatsink, useful if your motherboard doesn’t have one.

A bigger problem than the likes of Corsair and Gigabyte dropping the prices of their PCIe 4.0 drives comes from another budget operator: Sabrent. The Sabrent Rocket 1TB NVMe 4 SSD also rolls in at $199, and can often be picked up for less. For the same money it ships with a cooler, has its own toolbox utility, and bundles Acronis True Image. That makes for a pretty easy win for the Sabrent drive.

This is great news for anyone actually buying a next-gen SSD, because it means there is plenty of choice out there, all at similar price levels. Unfortunately that will only be true while there is real competition, and we’re not sure how long that will hold. Unless Addlink can respond to such price drops in kind, it’s hard to suggest that this is the drive you should buy. Which is a shame, because without the Addlink S90 we possibly wouldn’t have such cheap next-gen drives. As it is, the Addlink S90 feels like the kingmaker, not the actual king.

Tested: X570 Motherboards Can Overjuice Ryzen, But Rarely Do

HWinfo claims that X570 motherboards from a variety of manufacturers are guilty of underreporting power to Ryzen CPUs so the chips will go faster at stock settings, but at the possible expense of chip longevity. AMD doesn’t appear to condone the misreporting; in response, the company said that it was investigating the issue, but that it doesn’t believe the chips will suffer excessive wear during the warranty period. So, after we wrote an article about the software vendor’s claims and its new feature (designed to detect the problem), we set out to determine if the new test was accurate and if there was any imminent danger to the health of Ryzen CPUs from motherboard makers cooking the books.

After testing three different X570 motherboards, using a variety of settings, cooling solutions and even firmware, we found that, while HWinfo does shine a light on some issues, it can output inflated values that aren’t representative of actual power misreporting. Of the three motherboards (an ASRock X570 Taichi, MSI X570 Godlike and a Gigabyte X570 Aorus Master), only the Taichi showed a huge delta between reported and actual power that resulted in increased performance. Those settings resulted in higher clock rates, voltages, and heat output. And that issue, which happened with the reviewer BIOS, largely disappeared once we installed the latest firmware. The remaining relatively small variances of 10 to 15 percent are easily explained by factors such as VRM variations.

HWinfo says its new power deviation measurement, built into its free-to-download utility, provides a means for users to determine if their motherboard is lying to their Ryzen chips. You simply put your CPU under load with any common multi-threaded test (Cinebench R20 is recommended), then monitor the value to see how it relates to 100%. A reading of 100% indicates that the motherboard is feeding correct values to the Ryzen processor, so it can modulate performance within expected tolerances, while lower values can indicate false power telemetry data.
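HWinfo hasn’t published the exact formula behind the metric, but conceptually it boils down to comparing the power the CPU’s SMU believes it draws against some reference. This Python sketch of that idea is our own illustration, not HWinfo’s actual implementation; the sample wattages are the figures from our Taichi testing later in this article:

```python
# A plausible model of a "power reporting deviation" metric: reported power
# as a percentage of an externally measured (or reference) value.
# This is an illustration only; HWinfo's real method is not public.
def reporting_deviation(reported_watts: float, reference_watts: float) -> float:
    """Return reported power as a percentage of the reference (100% = accurate)."""
    return reported_watts / reference_watts * 100

# Under-reporting: ~60W reported vs ~165W actually measured (Taichi reviewer BIOS).
print(f"{reporting_deviation(60, 165):.0f}%")   # 36% -> chip thinks it draws far less
# Over-reporting: ~160W reported vs ~142W believed consumed.
print(f"{reporting_deviation(160, 142):.0f}%")  # 113% -> chip thinks it draws more
```

Values well below 100% leave the chip headroom to boost harder than intended; values above 100% can cost a little performance.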

Be sure to read the forum thread for a more detailed description of the firm’s recommendation on how to test your own processor with the tool, but until further adjustments to the software are made, you should take the results with a grain of salt.

Testing for Motherboard Cheats

After hearing the report that some motherboards were misreporting key power telemetry data to Ryzen processors, my mind immediately went to the ASRock X570 Taichi motherboard we evaluated for our Ryzen 7 3900X and 3700X review.

At the time, the Taichi was our lone X570 motherboard in the lab, so I put it through its paces to assess whether or not it was suitable for CPU testing. I spent several days testing with the motherboard and encountered a few problems, such as drastically inaccurate power readings from software monitoring applications and lower performance with the auto-overclocking PBO presets than I recorded at ‘stock’ settings.

Encountering difficulties with motherboard firmwares is certainly not an exception during an NDA period—in fact, it’s often the rule. Both Intel and AMD platforms tend to suffer from these bugs early in the review process, and communication with either the chipmaker or the motherboard vendor usually helps iron out the initial missteps. 

However, the issues we encountered with the Taichi remained unresolved after speaking with ASRock, so we switched to a late-arriving MSI X570 Godlike motherboard a few days before the NDA expired, spinning up the tests you see in our review today. That wasn’t fun, but having to switch test hardware happens more than you might imagine.

We prefer to use software monitoring tools like AIDA64 and HWinfo for our power measurements, as they scrape the power consumption measurements directly from the sensor loop, thus removing VRM inefficiencies from the values and showing us exactly how much power the processor itself consumes. That allows us to derive in-depth power consumption and efficiency metrics. 

Software monitoring is also great because we can trigger it during our scripted tests, thus simplifying and speeding the process for our large test pools that often include 15 different processors/configurations. Unfortunately these measurements can be gamed by motherboard vendors, so due diligence is key if you rely on software-based polling, especially in light of the misreported power telemetry issue with some AM4 motherboards.

Intercepting power at the EPS12V connectors (the eight-pin CPU connectors on the motherboard) is a good method for measuring power consumption. However, it doesn’t measure the true amount of power flowing into the processor because VRM inefficiencies, typically in the range of 15% on high-end motherboards, come into play. 

Modern processors also draw power from separate minor rails on the 24-pin connector for various functions, like memory controllers, graphics, and I/O interfaces. Those measurements aren’t included in the measurements from the EPS12V connectors. The 24-pin also supplies power to the rest of the system, making it impossible to split out the amount of power dedicated to the CPU. We also don’t have software-triggerable hardware that would enable scripting the measurements into our automated test suite.

In an attempt to get the best of both the hardware- and software-logging worlds, we use either Powenetics hardware or Passmark’s In-Line PSU tester to measure power consumption at the EPS12V connectors (i.e., the two connectors that supply the lion’s share of power to the processor). As part of our usual evaluation process of a new motherboard for CPU testing, we validate that the sensor readings obtained from the logging software, like AIDA64 or HWinfo, plausibly align with the power readings that we intercept at the EPS12V connectors.

This can involve a bit of fuzzy math, as VRM inefficiencies can create deltas between the power delivered to the VRMs and the power that’s fed to the processor. These deltas vary based on the components in each motherboard’s power delivery subsystem (typically ~10% to ~15%), but massive inaccuracies aren’t hard to spot, especially like those we charted out below.
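To make that fuzzy math concrete, here’s a small Python sketch of how an EPS12V reading can be translated into a bracketed estimate of the processor’s true draw. The helper function and the exact 85-90% efficiency window are our own assumptions based on the typical ~10-15% VRM losses described above:

```python
# The EPS12V reading includes VRM losses, so the power actually delivered to
# the CPU is bracketed by an assumed VRM efficiency range (~85-90% here).
def package_power_range(eps12v_watts: float,
                        eff_low: float = 0.85,
                        eff_high: float = 0.90) -> tuple[float, float]:
    """Estimate the low/high bounds of power actually reaching the CPU."""
    return eps12v_watts * eff_low, eps12v_watts * eff_high

# Example: a 165W average measured at the EPS12V connectors.
low, high = package_power_range(165)
print(f"Estimated package power: {low:.0f}W to {high:.0f}W")
```

If software-reported package power falls far outside a bracket like this, something is off; a small gap is just VRM inefficiency.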

The Overclocking Connection

First, we need to determine what would stand out as unsafe behavior. AMD doesn’t provide an ‘unsafe voltage’ specification, instead defining three key limits for stock operation. The list below is reproduced word-for-word from AMD’s CPU reviewer’s guide:

“Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket. Applications with high thread counts, and/or “heavy” threads, can encounter PPT limits that can be alleviated with a raised PPT limit.

a. Default for Socket AM4 is at least 142W on motherboards rated for 105W TDP processors

Thermal Design Current (“TDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in thermally-constrained scenarios.

a. Default for Socket AM4 is at least 95A on motherboards rated for 105W TDP processors.

Electrical Design Current (“EDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in a peak (“spike”) condition for a short period of time.

a. Default for Socket AM4 is 140A on motherboards rated for 105W TDP processors.”

— AMD CPU Reviewer’s Guide
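Expressed as code, checking an operating point against those quoted stock limits is trivial. This Python sketch is purely illustrative (the helper and constant names are ours); real enforcement happens inside the processor’s own power management, not in software like this:

```python
# The quoted AM4 stock limits for a 105W-TDP part on a suitably rated board.
AM4_105W_LIMITS = {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140}

def within_stock_limits(ppt_w: float, tdc_a: float, edc_a: float) -> bool:
    """True if the operating point respects the stock PPT/TDC/EDC limits."""
    return (ppt_w <= AM4_105W_LIMITS["PPT_W"]
            and tdc_a <= AM4_105W_LIMITS["TDC_A"]
            and edc_a <= AM4_105W_LIMITS["EDC_A"])

print(within_stock_limits(142, 95, 140))  # True: right at the stock ceiling
print(within_stock_limits(160, 95, 140))  # False: PPT exceeded
```

The point of misreported telemetry is that the chip evaluates these limits against numbers it has been fed, so a board that lies about power can stay "within" PPT while actually drawing well past it.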

You can override those settings either manually or with AMD’s auto-overclocking Precision Boost Overdrive. You can access this feature via either the BIOS or Ryzen Master software. Given the allegations of reliability implications due to increased voltages at stock settings, we set out to use this warranty-invalidating feature as a comparison point to the voltage and power thresholds that come as a byproduct of erroneous power telemetry.

Unfortunately, PBO typically doesn’t deliver huge performance gains if you adhere to the basic presets. Motherboard vendors define these profiles, and some users have opined that the slim auto-overclocking margins could be due to the misreported power telemetry eating into the available overclocking headroom. The answer isn’t quite that straightforward, but it does make sense that altered power consumption at stock settings could chew into the available overclocking margin. 

At stock settings, AMD’s Precision Boost 2 automatically exposes the most performance possible given the capabilities of your motherboard’s power delivery subsystem and your cooler. Premium components unlock more performance, but that doesn’t qualify as overclocking because these algorithms are constrained by the PPT, TDC and EDC settings during stock operation.

Engaging PBO overrides the stock settings for these variables. The basic “enabled (PBO on)” preset enables significantly higher PPT/TDC/EDC limits, but doesn’t change two important settings: PBO Scalar or Clock.

PBO Scalar overrides the AMD default health management settings and allows increased voltage at the maximum boost frequency and lengthens boosting duration. Changing the PBO Scalar setting unlocks the best auto-overclocking performance, so the basic preset can be lacking. 

You can also use the “PBO Advanced” profile that defines the limits of each motherboard based on the capabilities of the power delivery subsystem (as defined by the motherboard vendor). This setting exposes the highest PPT, TDC and EDC settings for the motherboard, but also doesn’t change the PBO Scalar and Clock settings. However, this setting does allow you to change the PBO Scalar and Clock settings manually, with the former usually unlocking much higher auto-overclocking potential. 

We used three profiles for our testing below. The ‘Stock’ settings consist of an explicit disablement of all PBO features, while ‘Advanced Motherboard’ (‘Adv. Mobo’) is the profile that sets the highest preset PPT, TDC and EDC values for each motherboard, but doesn’t change the PBO Scalar value.

Some motherboard vendors also include custom presets in the BIOS that include scalar manipulations, but those aren’t available on all motherboards. To keep things consistent, we also manually adjusted all motherboards with the same settings that we’ve marked on the charts as ‘Recommended.’ This setting includes a manually defined Scalar and AutoOC Clock setting, as listed in the table below.

Unlike in our reviews, we also kept memory settings consistent between the various configurations to eliminate that as a contributor to higher performance.

A Tale of Two “Reviewer BIOSes”

The first chart in this series plots the amount of power reported by the SMU, which reflects the total power the processor believes it is consuming, against the amount of power we recorded at the EPS12V connectors during five consecutive runs of the multi-threaded Cinebench benchmark on the ASRock X570 Taichi motherboard.

We measured these values at stock settings with the firmware provided to reviewers (p1.21) and the included stock Ryzen cooler for this first test, as AMD specs the processor for operation with its own cooler. The measurements from HWinfo, marked as ‘Software,’ don’t align perfectly with the measurements from the Passmark In-Line PSU tester (marked as EPS12V) on the time axis due to differing polling rates, but they give us a good-enough sense of the difference between the two measurements.

The first chart shows that the 3900X’s SMU reports ~60W during the Cinebench renders, while our physical measurements record peaks around 180W. The CPU averaged ~165W under load. That’s a massive ~3X delta between the amount of power coming into the EPS12V and the software-monitored values, which shows exactly why we chose not to use this board for our review. 

The second slide in the album contains measurements from the reviewer BIOS (1015) included with MSI’s X570 Godlike, and the software measurements align nearly perfectly with the observed power draw from the EPS12V connectors. We expect some losses from VRM inefficiencies, so this result is almost too good. Given that some power is fed from the 24-pin that we’re not measuring, the results are far more believable than the values we received from the Taichi motherboard, though.

We spoke with MSI about the too-perfect measurements, and the company tells us that, for its initial BIOS, it used a reference CPU VDD Full Scale value derived from an AMD-provided test kit/load generator. This is the setting at the heart of the matter: the processor uses it to determine how much power it consumes. 

The reference value resulted in the X570 Godlike over-reporting the power fed to the processor, which can actually result in slightly lower performance. Later, the company tested the parameter with a real CPU to fine tune it for the X570 Godlike’s power delivery subsystem, so changes were made in newer BIOS revisions to bring the reporting more in line with the motherboard’s capabilities. You’ll see the impact of those changes when we test the new BIOS below. The HWinfo deviation measurement, which we aren’t using for these tests, doesn’t appear to take those rational changes into account.
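To illustrate why this one parameter matters so much, here’s a simplified model of the scaling involved. The SMU effectively derives its power figure from a current reading scaled by the board’s declared full-scale value, so a board that declares a value far below reality makes the chip think it is drawing far less than it is. The linear model and the specific ampere values below are our own illustration, not measured board settings:

```python
# Simplified model: the SMU's power view scales with the ratio of the
# board's configured full-scale current to the board's true full-scale value.
# Illustrative only; real telemetry involves more than this one ratio.
def smu_reported_power(actual_watts: float,
                       true_full_scale_a: float,
                       configured_full_scale_a: float) -> float:
    """Power the SMU believes it draws under a mis-set full-scale value."""
    return actual_watts * configured_full_scale_a / true_full_scale_a

# A board declaring a full-scale value far below reality under-reports:
# the chip thinks ~60W while actually pulling ~165W, as on the Taichi.
print(smu_reported_power(165, true_full_scale_a=300, configured_full_scale_a=110))
```

Because the PPT/TDC/EDC limits are evaluated against the SMU’s view, under-reporting of this kind directly translates into extra boost headroom at "stock" settings.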

The third slide measures performance with the Taichi motherboard, but this time we swapped out the stock cooler for a 280mm Corsair H115i AIO watercooler. This cooler gives the processor more thermal headroom, and you’ll see the results of AMD’s innovative Precision Boost 2 and PBO algorithms in the next series of tests.

The overarching conclusion from this first look is that ASRock’s reviewer BIOS for the X570 Taichi vastly under-reported power information to the processor, thus allowing it to draw more power than the X570 Godlike, which actually over-reported its power use. As you’ll see below, that equates to more voltage, heat, and performance from ASRock.

Given that all of the cores can run at different voltages at the same time, we plotted the maximum value recorded across the cores for each measurement to simplify the charts. We used the same approach for clock speed, and used a non-zero axis for more granularity. When the processor is under load, most of the voltage and frequency values remain consistent among the cores.

The first three charts above outline the voltage applied to the Ryzen 9 3900X with the reviewer firmware. Luckily, the voltage scale is fixed, so these measurements are accurate regardless of any adjustments to the full scale current value that’s at the heart of the issue. The first slide shows that the X570 Taichi, at stock settings, applies 1.3V to the processor while it’s under load, while the X570 Godlike feeds the chip ~1.25V. That isn’t much of a variation despite the ~20W delta in the cumulative measurements shown above, but there are obviously a lot of variations between how the respective motherboards handle power.

You’ll notice that the preset PBO settings (PBO Enabled) result in lower voltages and clock frequencies with the Taichi. However, when we adjust the PBO Scalar setting with our ‘PBO Recommended’ alterations, voltages rise along with clock speeds. In contrast, the MSI X570 Godlike operates to our expectations, with more performance coming as a result of the overclocked settings.

The original Taichi reviewer BIOS offers similar all-core boost speeds of around 4.125 GHz at stock settings with the H115i cooler, compared to the Godlike’s 4.05 GHz. With the air cooler, clocks are mostly similar for the Taichi between its stock and PBO Recommended settings, while using the liquid cooler exposes more headroom for a slightly higher clock.

The impact on thermals is immediately obvious, with the PBO Recommended configuration producing far more heat (up to 92C) during the test with the stock cooler than the processor’s stock settings. The ‘PBO Enabled’ preset actually generates less heat on the ASRock board. It’s noteworthy that the test with stock settings peaks in the 87C range, but we’ll outline lower temperatures with the Taichi motherboard in a series of tests with the latest available firmware.

Despite the higher heat and voltages from the PBO Recommended settings, the Taichi motherboard delivers less performance during the Cinebench run than it does at stock settings. Now, PBO performance does vary based on the thermal headroom available to the chip, but it runs counter to our expectations to receive lower performance with overclocked settings.

For the Taichi, topping the 3900X with the Corsair H115i rectifies the disparity and provides the slimmest of performance gains with the tuned settings, but be aware that we’re using a non-zero axis for the chart due to the remarkably slim deltas. There’s an average uptick of 19 points, or a mere 0.24%. That surely isn’t worth the increased voltage and thermals. 

In this series of charts, we plotted the respective stock measurements with the reviewer BIOSes for both the MSI X570 Godlike and the ASRock X570 Taichi. While each vendor obviously tunes its respective motherboard using many parameters, it’s clear that the Taichi enjoys a performance benefit due to the misreported power telemetry. As a result, voltages, clocks, thermals and performance are all higher on the Taichi. Whether this is the result of an honest mistake or overzealous tuning for the sake of a performance edge is debatable, but the misreporting appears to have been corrected in later BIOS revisions, as we’ll see below.

Here’s a series of charts for the Taichi with the latest firmware available on its public site. Again, we used both the stock cooler and an H115i AIO for the two configurations.

The deltas between the power consumption reported by the SMU and measured at the EPS12V connectors have been reduced tremendously. The chip still consumes up to 160W under load compared to the reported value of 142W, but we can chalk that up to the expected VRM losses on this particular motherboard.

According to the HWinfo utility, the Taichi motherboard is still feeding incorrect power telemetry data to the SMU—the utility lists the deviation at ~7%. However, our measurements align more with our expectations of VRM losses, so the HWinfo data could be a misreport. (It’s still unclear exactly how HWinfo determines deviation.)

The reduced Cinebench performance with the PBO settings when paired with the stock cooler also remains (the two PBO results overlap one another in the chart), while topping the chip with the H115i produces similar slight wins for the PBO Recommended configuration. The PBO Enabled configuration remains slower in all cases.

It’s important to note that even with the adjusted power telemetry data, the power consumption we measured at the EPS12V connector remains in the low 160W range, which is fine given the expected VRM losses. 

Gigabyte X570 Aorus Master

We have one other X570 motherboard in the lab, the Gigabyte X570 Aorus Master, so we gave it a spin through the same series of tests to gauge how it lands on the accuracy scale with the latest BIOS. We also wanted to see if it exhibits the same performance trends with the various PBO settings. The Aorus Master also tops out near 142W of power consumed, which aligns nearly perfectly with the software measurements. Given that we don’t expect perfect efficiency from the power delivery subsystem, this implies the power reporting isn’t optimized on the Aorus Master, creating a situation much like what we saw with the X570 Godlike: over-reporting that can actually lead to slightly reduced performance. We’ve pinged Gigabyte on the matter.

However, even without an obvious misreporting (probably over-reporting) of the power telemetry data, we still encounter the same condition of reduced performance when activating the PBO Enabled preset. It is noteworthy that the Aorus Master responds well to manipulating the Scalar variable and delivers more performance. We’ve also outlined the issues with the standard PBO profile to Gigabyte. The company has replicated the condition and is investigating further. 

The “Control”: MSI X570 Godlike

The MSI X570 Godlike is the lone motherboard we have in the lab that allows us to adjust the parameter responsible for altering telemetry data: CPU VDD Full Scale Current. This setting appears to default to 280A on the Godlike with the latest publicly available non-beta BIOS (1.8). Remember, the company says its value is accurate given fine tuning for its power delivery subsystem, so we also tested with the setting adjusted to the 300A value recommended by The Stilt in his forum post (listed as VDD Adjusted in the charts).

The SMU-reported and EPS12V measurements align closely in the first chart, which outlines the results of our 300A adjustment. The second chart, measured at stock settings with no VDD adjustment, clearly shows a delta between our recorded values and the reported power consumption, which now pegs at roughly 160W as opposed to roughly 140W with the adjusted VDD value. The behavior with the default ‘Auto’ setting is more in line with an expected result than the adjusted 300A values. In contrast, the adjusted 300A value shows almost no losses due to VRM inefficiency, which would be nice if true. But it isn’t. 

HWinfo hasn’t shared information with us to clarify how it measures deviation, so the tool is a bit of a black box. The HWinfo tool reports a variance of 12% with the auto VDD settings above, implying that the tool makes its decisions based on reference full scale current values, and not those optimized by vendors.

In the third slide, the adjusted 300A VDD setting results in lower heat, and the successive charts cover reduced voltages, frequencies, and performance associated with the adjustment. We’re more inclined to believe that, based on the physical measurements we’ve taken and the normal amount of expected VRM efficiency losses, MSI’s auto VDD settings are closer to reality than suggested by the HWinfo deviation metrics. 

We went ahead and plotted our now-standard battery of tests with the new Godlike firmware, leaving the VDD setting to Auto. The motherboard exhibits many of the same tendencies we see with the other boards with AMD’s PBO presets. However, it does fare considerably better than other boards with the PBO enabled profile, merely matching the stock settings in most metrics.

Final Thoughts (For Now)

Modern chips rely upon accurate telemetry data, and HWinfo’s new deviation feature helps shine a light on how some motherboard vendors have found a way to misreport power telemetry. Unfortunately, the inner workings of the tool aren’t entirely clear, and HWinfo doesn’t specify how it assigns the deviation value. From our testing, it appears the tool doesn’t take what we would consider legitimate adjustments to the full scale current into account, which causes inflated deviation readings.

According to our sources, AMD has load generation tools that help motherboard vendors define reference values for power telemetry reporting, but those are more general settings that assume a ~5% overhead for the tolerance of VRM components. In practice, the tolerance can be up to 10%. As a result, motherboard vendors can fine tune the telemetry reporting for their unique power delivery subsystems, thus ensuring the correct amount of power is delivered to the chip. The HWinfo deviation metric doesn’t appear to account for what we consider rational adjustments of this kind; at least on the surface, the tool seems to measure against the reference values, but its method is unclear. The deviation metric is still a work in progress, and we noticed quite a bit of variation in some measurements, so your mileage may vary.

It’s possible that intentionally manipulated power telemetry reporting can expose an extra performance edge and go undetected by both reviewers and common users alike, leading them to post erroneous power consumption results. We saw a pretty egregious example of incorrect reporting in our testing with a BIOS provided to reviewers that is also available to the public, so it remains important for reviewers to use physical power measurements to validate the results they get from software utilities. In fairness, we’d expect a more subtle change than what we observed with the Taichi reviewer BIOS if the company was out to trick reviewers, so it’s debatable whether or not the changes to reporting were intentional. 

AMD’s auto-overclocking Precision Boost Overdrive (PBO) feature can actually cause performance losses in some workloads if you use the vendor-defined basic preset values, though the severity varies from motherboard to motherboard. We set out to use the PBO values as a reference for what unsafe settings look like (PBO does invalidate your warranty), but in many cases found the basic PBO presets resulted in lower performance. They need some work and currently aren’t a good measuring stick. Even on motherboards that correctly report power, the basic PBO presets didn’t provide any tangible benefit.

In contrast, manual changes (which we covered above) to the Scalar setting provide performance gains, and those are the better reference point for unsafe settings. The Taichi reviewer BIOS suffered from the worst misreporting, but it didn’t result in power settings that match or exceed the settings imposed by our PBO profile with higher Scalar settings. 

Misreported data can cause the CPU to run a bit harder (and hotter) during normal operation. It does result in higher power consumption, voltage, heat, and clock speeds, but you shouldn’t be too worried about the amount of power applied to your chip if your board is misreporting the telemetry data.

It’s best to leave the assessment of the impact on Ryzen chip longevity to AMD or other semiconductor professionals who work in the reliability field, as a wide array of factors impact those metrics. Reliability metrics are based on modeling and information that we’ll never see, and a complex matrix of factors works into the equation. Some factors, such as higher current and thermal density, increase the rate of wear and accelerate electromigration (the gradual displacement of metal atoms in a conductor caused by the flow of electrons), but the impact of the two on one another doesn’t scale linearly, and it varies depending on how long the processor stays in a heightened state.

A chip will age, and transistors will eventually wear out, even under optimal operating conditions. Still, while the increased power consumption we see due to the erroneous telemetry data could have an impact with heavily-used processors and reduce longevity, it boils down to how much the increased power and heat output speed the aging process.

It is plausible that there could be at least some impact to chip longevity due to manipulated power telemetry, but AMD’s initial assessment is that it won’t have a meaningful impact during the warranty period. We didn’t find any glaring problems that would be cause for immediate alarm, and AMD’s internal mechanisms work well to protect users from settings that would cause catastrophic failures. The company’s engineering teams have also obviously studied the matter to some extent and haven’t yet seen any adjustments that could result in significant degradation during the warranty period. 

AMD’s statement seemingly confirms that it wasn’t aware of the manipulations. It will be interesting to see if motherboard makers end the practice, or if AMD finds that because the adjustments don’t impact longevity in a meaningful way, the practice can continue. We’ll keep an eye on newer BIOS releases as they trickle out for any significant changes to power telemetry reporting.

This is AMD’s secret weapon against Intel and it could change personal computing forever

AMD’s big announcement this week was the launch of the new Ryzen 3000 XT CPUs – a stopgap between the current generation and the Ryzen 4000 series expected to launch later this year.

Tucked away at the foot of the press release is mention of something that might become a fundamental part of AMD’s toolset, sooner rather than later, as it seeks to compete more sustainably with arch nemesis Intel.

TechRadar Pro covered the launch of AMD StoreMI back in April, but version 2.0 has now been announced and will include “a new caching-based acceleration algorithm that enhances data integrity and prioritizes most-used data, speeding up boot times by up to 31% and decreasing game load times by up to 13% vs an HDD only”.

The tests were carried out using a hard drive of unknown capacity (or spinning speed) with a PCIe Gen 4 NVMe SSD, again of unknown origin. In reality, you should be able to mix and match any sort of storage, regardless of whether it is magnetic or not.

No RAMDisk yet

Sadly, this version of StoreMI doesn’t (yet) include the Radeon RAMDisk, which would achieve the Holy Grail of allowing RAM and storage to mix, similar to what Intel has done with Optane.

We still believe this facility will arrive at some point, and here's why: AMD has been working very closely with Microsoft on the brand new Xbox Series X gaming console, and one of the biggest leaps in performance has come from moving to a new storage system that combines software (DirectStorage) with customized hardware.

It turns out that DirectStorage is something Microsoft plans to bring to Windows as well, as it can reduce the processor overhead from I/O operations from multiple cores to just a fraction of a single core. While it will not be tied to any custom hardware, AMD is likely to benefit due to the modular nature of its CPU architecture.

So where does that leave us? In a not-so-distant future, one can imagine an AMD system that pools together all the available memory/storage resources, managed intelligently in a way that’s far more efficient than what Windows 10 achieves with Virtual Memory, the part of your storage component the operating system uses as RAM.

All this is pure speculation, of course, but the fact AMD has dedicated resources to StoreMI makes us optimistic about its future.

Corsair A500 Review: Premium Price, Curious Flaws

Corsair, the company that has brought us seemingly endless AIO liquid cooler models for the masses, has recently introduced its first large air cooling option, the A500. The dual-fan, monolithic heatpipe cooler is also devoid of another Corsair staple: RGB/aRGB lighting. That makes the new Corsair A500 either a welcome change or a deal breaker, depending on which camp you're in.

Features

Corsair ships the A500 with a quality set of mounting hardware, including nicely plated mounting studs and securing nuts as well as a very robust, laser-cut steel backplate. Cable ties are a welcome addition, as is the 2-way PWM splitter and Phillips screwdriver included in the box.

And while the A500 does ship with pre-applied thermal paste, an additional syringe of XTM50 thermal compound is provided for future re-installs of the cooler. Corsair covers the A500 with a 5-year warranty.

The most interesting feature of the A500 is its pair of non-RGB ML120 fans nested within a set of molded frames which ride on friction rails on either side of the cooler tower. Fans are pre-installed out of the box in a standard push+pull configuration, eliminating any confusion around fan and airflow direction.

The A500 features a quartet of plated heatpipes to dissipate thermal load from the base of the cooler throughout the thermal tower. The mounting plate comes permanently affixed to the base and uses a pair of tension screws to secure it to the cooler's included cross-bar frames.

Access to the tension screws is gained by removing the attractive, brushed-aluminum top plate bearing the Corsair logo, popping it free of the cooler tower. The tower's fin stack has a central cutaway that gives the included Phillips screwdriver direct access through this channel to the tension screws at the base.

Corsair ships the A500 with a pre-applied patch of thermal compound in a grid layout. The four direct-contact heatpipes of the cooler are integrated into the plated mounting base, and the central cutaway is also visible here, showing how the tension screws are accessed via this channel.

We clean every cooler base of pre-installed thermal compound using alcohol wipes and a cloth, and later use Arctic MX-4 for all of our tests. During this process, we noticed a small irregularity in the direct-contact heatpipes along the base of the A500. One of the heatpipes (second from the left) sits slightly raised compared with the others in our sample, which is easily seen with a steel straightedge and some backlighting.

The thermal compound contact patch shows the impact of this when the cooler is installed: one direct-contact heatpipe is elevated just enough to isolate it from contact with the CPU's IHS. This also means that this single heatpipe can only 'work' by absorbing heat from the remainder of the cooler base and the adjacent heatpipes, instead of directly removing it from the processor.

Thanks to the removable top plate, installing the A500 is quite a simple task, and it does not require removing the cooler's fans, which is typically a requirement with other large air coolers. Larger memory DIMMs can be accommodated with a bit of vertical adjustment of the fans, although this may only be required for the tallest of memory sticks.

We tested the Corsair A500 against peers of similar price and relative size, specifically the be quiet! Dark Rock Pro 4, DeepCool Gamer Storm Assassin III and Noctua NH-U12A. All have been evaluated on our i7-5930K test bench running at 4.2 GHz @ 1.20v, paired with 16GB of DDR4-2400 Crucial Ballistix on our MSI X99S XPower AC motherboard.

Note that while our cooling platform is old at this point, the CPU’s 140W TDP, combined with a healthy overclock, still gives today’s coolers a tough workout. That said, we are planning to update our cooling testbed once Intel’s latest Comet Lake-S CPUs and accompanying motherboards arrive.

Both the Corsair A500 and Noctua NH-U12A utilize a pair of 120mm fans, which does account for slightly higher thermal load temperatures than the be quiet! Dark Rock Pro 4, with its 135mm + 120mm fan pairing, and the dual-140mm DeepCool Gamer Storm Assassin III.

The use of smaller 120mm fans by the A500 and the NH-U12A gives us higher measured fan RPM, while larger fans will usually spin more slowly. Since the Dark Rock Pro 4 features two fans of different diameters, we've listed them separately, as we did in our original coverage of this cooler.

The 2400+ RPM fans on the Corsair A500 kick up a lot of turbulence, leading to elevated decibel levels, but considering we’ve seen similar results on the Corsair H100i Pro lineup, this comes as no surprise.

Acoustic efficiency evaluates how coolers in our tests perform when we combine thermal performance and noise level, essentially building a graph of how well a cooler does work and how acoustically efficient it is during that process.

With the Corsair A500 priced right at $100, it struggles against some of its peers due to noise level and a few degrees of thermal separation. Both the Corsair A500 and the Noctua NH-U12A are priced around 10% higher than the be quiet! Dark Rock Pro 4 and the DeepCool Gamer Storm Assassin III, also creating more separation in our performance value chart.

Thermal imaging from our FLIR ONE Pro camera shows some notable differences in heat soak at 50% fan speed, seen in the center cooler cutout vent, as well as an indication of additional thermal buildup at the exhaust fan (left of the center logo). Overall, the mass of the cooling fin stack shows equalization in both photos, indicating that the cooler is effectively distributing thermal loads evenly through the A500.

Corsair has positioned itself in a way that gives its customers another choice, one that retains the triple-sail logo while alleviating the fears of liquid cooling and maintaining brand loyalty. The A500 isn't the highest-performing cooler for big-air money, though, so unless you are a Corsair loyalist, it's a difficult option to recommend considering the other options available.

We also have some concerns around the irregularities in the milling of the direct-contact heatpipes as we know we aren’t the only ones to have encountered this problem. It makes us wonder if there would be marked performance improvement if this build anomaly were corrected, and we’re hopeful that Corsair will correct the issue in future retail updates.

New Radeon Pro VII Wows on Price and Double Precision

Earlier this morning, AMD revealed its new Radeon Pro VII graphics card, its latest workstation-class competitor to Nvidia’s Quadro line of GPUs. As its name suggests, this is a professional level update on last year’s Radeon VII GPU, incorporating the same Vega 20 GPU but almost doubling the base Radeon VII’s double precision performance.

When AMD announced the Radeon VII at its keynote last year, it was the world’s first 7nm gaming graphics card, using Vega 20 to compete with Nvidia’s RTX 2080. Now, AMD is applying Vega 20 to its workstation cards, producing a pro-level successor to the Radeon VII that takes almost the same specs and ups the double precision performance to 6.5 TFLOPs, offering support for mixed graphics/compute tasks that almost matches Nvidia’s much more expensive Quadro GV100.

While this comes at the cost of a slightly lower boost clock and slightly lessened single precision performance, making it not quite as powerful a gaming machine as its non-pro predecessor, it will be a boon to 3D modelers and financial analysts, who frequently run mixed graphics/compute software.

The AMD Infinity Fabric external link is also a new addition to the Radeon Pro VII, migrating over from the Radeon Instinct MI50/MI60. The purpose here is to make multi-GPU performance more efficient, enabling a total of 168GB/s of bandwidth between two connected GPUs.

All these features also make it a step up from the Radeon Pro WX 9100, which runs on the older Vega 10, as well as the Radeon Pro W5700, assuming memory clock speed isn't a priority.

Of course, the key feature for the Radeon Pro VII compared to its Nvidia Quadro counterparts is price. Launching for $1,800, it's primed to severely undercut Nvidia's double precision king, the Quadro GV100, while also remaining competitive with the Quadro RTX 5000 and 6000. While its feature set is unique (the GV100 is only really a fair comparison when it comes to FP64 support), this makes it a strong buy for value, assuming it fits your needs.

Patriot P300 M.2 NVMe SSD Review: Low Price, No Frills

Patriot’s been on fire lately, releasing some appealing SSDs. The company’s Viper VP4100 is one of the fastest money can buy, and the Viper VPR100 offers solid PCIe Gen 3 performance with some tasteful RGB illumination. But, while these SSDs are great picks for enthusiasts, they’re too expensive for those searching for NVMe flash storage on a tight budget. Enter Patriot’s P300.

Significantly outpacing SATA competitors, the P300 is the company's latest M.2 NVMe SSD, offering up multi-GB performance figures thanks to a Phison E13T DRAMless NVMe controller and Kioxia's latest 96L TLC flash. But while the price is appealing (starting at just $35 for the 128GB model), the P300 falls behind the competition in terms of overall value. In short, you won't find it on our list of best SSDs, though that doesn't mean it's not worth considering, especially if you find it on sale.

Patriot is offering the P300 in 256GB, 512GB, 1TB, and 2TB capacities, although the smallest 128GB capacity is not yet available. Patriot prices the P300 at around $0.12 to $0.20 per GB, depending on the capacity, with our 1TB sample being one of the best values at $120 shipped.
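Those per-gigabyte figures are easy to sanity-check against the prices quoted above; a quick sketch (using the decimal capacities drives are marketed under, so 1TB = 1,000GB):

```python
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Cost per gigabyte at the drive's marketed (decimal) capacity."""
    return price_usd / capacity_gb


# Our 1TB review sample at $120 shipped:
cost = price_per_gb(120, 1000)  # 0.12 dollars per GB
```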

The company rates these SSDs to hit sequential performance figures of 2.1/1.7GB/s read/write and upwards of 290,000/260,000 IOPS read/write in random performance. The smallest capacities take a slight performance hit, however. As an entry-level NVMe SSD, the endurance rating on the P300 is lower than mainstream competitors, but is still more than enough for most users. Patriot backs the P300 by a three-year warranty, too.

A Closer Look

Patriot's P300 comes in an M.2 2280 form factor. Our 1TB sample is single-sided, meaning all components are on just one side of the PCB to ensure compatibility with mobile devices that have thin size constraints. If you're installing the drive in a desktop and care about aesthetics, though, you may want to look elsewhere. The P300 sports a distracting white sticker over an ugly blue PCB on our U.S. version. Those not in the U.S. will receive one with a black PCB and a Silicon Motion SM2263XT NVMe controller.

Powering our U.S. version is Phison’s PS5013-E13T PCIe 3.0 x4 NVMe 1.3-compliant 4-channel SSD controller. This 28nm controller utilizes a single-core Cortex R5 CPU that operates at 667MHz, plus a CoXProcessor to aid with NAND management tasks.

The P300 was built with a DRAMless architecture to reduce manufacturing costs. Without DRAM on the device, the SSD's potential performance is hindered compared to DRAM-based SSDs. Phison's E13T mitigates this a bit with Host Memory Buffer (HMB) support, which lets the controller use a slice of the host system's memory as a DRAM cache to accelerate flash translation layer (FTL) interaction, offering better performance than it could achieve without the feature.
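To see why a missing DRAM cache matters, consider the size of the flash translation layer's logical-to-physical map. The back-of-the-envelope sketch below assumes a typical 4KB mapping granularity and 4-byte entries; these are common industry figures, not Phison-published numbers for the E13T:

```python
def ftl_map_bytes(capacity_bytes: int, page_bytes: int = 4096, entry_bytes: int = 4) -> int:
    """Approximate size of the full logical-to-physical mapping table."""
    return (capacity_bytes // page_bytes) * entry_bytes


# A 1TB (decimal) drive needs roughly 1GB just for the full map:
full_map = ftl_map_bytes(1_000_000_000_000)  # ~931 MiB
```

A map of that size would normally live in onboard DRAM; the HMB allotment a host grants is typically only tens of megabytes, so a DRAMless controller can only cache the hottest slices of the table at any moment.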

The controller interfaces with Kioxia's (formerly Toshiba Memory) BiCS4 96L TLC NAND flash. At 1TB, our sample features four NAND packages, each utilizing four 512Gb dies. They operate at 1.2V and interface with the controller at a speed of 800 MT/s.
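The die math works out neatly, remembering that NAND die capacities are quoted in gigabits rather than gigabytes:

```python
packages = 4
dies_per_package = 4
die_capacity_gbit = 512

total_gbit = packages * dies_per_package * die_capacity_gbit  # 8,192 Gb
total_gb = total_gbit // 8                                    # 1,024 GB, i.e. the 1TB class
```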

If the controller gets too hot, thermal throttling kicks in to prevent damage. The controller also boasts end-to-end data protection and Phison's fourth-gen LDPC and RAID ECC to ensure data integrity. Along with S.M.A.R.T. data monitoring and TRIM, it supports secure erase to wipe the drive clean, as well as the APST, ASPM, and L1.2 power saving modes.

Comparison Products

Up for comparison, we threw in a handful of entry-level NVMe competitors, including the WD Blue SN550 1TB, Intel SSD 665p 1TB, and Crucial P1 1TB. We added in Team Group's MP33 1TB, which is close to how the non-US version of the P300 would perform with its SM2263XT controller and 96L NAND flash. Additionally, we threw in Adata's XPG SX8200 Pro and Corsair's Force MP600, two top-ranking NVMe SSDs, as well as Crucial's MX500 and WD's Black HDD, SATA-based competitors, for good measure.

Game Scene Loading – Final Fantasy XIV

The Final Fantasy XIV Stormblood and Shadowbringers benchmarks are two free real-world game benchmarks that easily and accurately compare game load times without the inaccuracy of using a stopwatch.

Patriot’s P300 lags the competition when it comes to serving up game data. With total load times that exceed the SATA-based Crucial MX500, it falls into eighth place. That doesn’t exactly make the drive slow though. It still offers significantly faster performance than an HDD.

Transfer Rates – DiskBench

We use the DiskBench storage benchmarking tool to test file transfer performance with our own custom blocks of data. Our 50GB data set includes 31,227 files of various types, like pictures, PDFs, and videos. Our 100GB data set includes 22,579 files, with 50GB of them being large movies. We copy the data sets to new folders and then follow up with a read test of a newly-written 6.5GB zip file, an 8GB test file, and a 15GB movie file.

When reading large files from Patriot's P300, performance was snappy and much faster than the Crucial MX500, closer to that of the WD Blue SN550. But while large file reads were quick, the large folder copy tests show sluggish performance compared to the rest of the NVMe-based competitors. Still, it was about twice as fast as the MX500 at copying our large test folders and four times faster than the WD Black HDD.

Trace Testing – PCMark 10 Storage Tests

PCMark 10 is a trace-based benchmark that uses a wide-ranging set of real-world traces from popular applications and common tasks to measure the performance of storage devices. The quick benchmark is more relatable to those who use their PCs lightly, while the full benchmark relates more to power users. If you are using the device as a secondary drive, the data test will be of most relevance.

Like Team Group's DRAMless MP33, the P300 ranks slower than any of the DRAM-based SSDs. Both perform relatively similarly overall, but SMI's SM2263XT is a bit more responsive here. Again, the P300 maintains a lead over the MX500, meaning that when dealing with application data, the P300 will offer a snappier user experience than SATA competitors.

Trace Testing – SPECworkstation 3

Like PCMark 10, SPECworkstation 3 is a trace-based benchmark, but it is designed to push the system harder by measuring workstation performance in professional applications.

In contrast to its performance in PCMark 10, Patriot's P300 shows somewhat stronger performance than the Team Group MP33 here. Completing the test about 14 minutes quicker, it showed stronger read and write performance when pressed with heavier loads. Both the P1 and 665p, QLC-based competitors, deliver faster performance, however, thanks to the additional DRAM buffers onboard their PCBs.

Synthetics – ATTO

ATTO is a simple and free application that SSD vendors commonly use to assign sequential performance specifications to their products. It also gives us insight into how the device handles different file sizes.

In ATTO, we tested Patriot's P300 at a QD of 1, representing most day-to-day file access, at various block sizes. The device's read performance at small file sizes leaves it clearly lagging behind the competition. Patriot's P300 displays responsive sequential write performance, however. These differences may explain why PCMark 10 favored the Team Group MP33 while SPECworkstation 3 favored the Patriot P300.

Synthetic Testing – Iometer

Iometer is an advanced and highly configurable storage benchmarking tool that vendors often use to measure the performance of their devices.

We measured Patriot's P300 hitting peak throughput speeds of about 2.6/1.8 GBps read/write, but it takes multiple simultaneous transfers to attain that read speed. Random performance is weak compared to competitors as well. When randomly reading at QD1, the P300 lags behind the MX500. Compared to a plain old HDD, however, the P300 offers significantly faster performance any way you look at it.

Sustained Write Performance, Cache Recovery, and Temperature

Official write specifications are only part of the performance picture. Most SSD makers implement a write cache, which is a fast area of (usually) pseudo-SLC programmed flash that absorbs incoming data. Sustained write speeds can suffer tremendously once the workload spills outside of the cache and into the “native” TLC or QLC flash. We use iometer to hammer the SSD with sequential writes for 15 minutes to measure both the size of the write cache and performance after the cache is saturated. We also monitor cache recovery via multiple idle rounds. 
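The core of that cache-probing loop is simple to sketch. Iometer is the tool we actually use; the hypothetical Python below just illustrates the idea of writing fixed-size chunks and timestamping each one, so a drop in throughput reveals where the cache ran out:

```python
import os
import time


def sequential_write_profile(path: str, chunk_mib: int, total_mib: int) -> list:
    """Write total_mib of data in chunk_mib chunks, returning MBps per chunk."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    speeds = []
    with open(path, "wb", buffering=0) as f:
        for _ in range(total_mib // chunk_mib):
            start = time.perf_counter()
            f.write(chunk)
            os.fsync(f.fileno())  # force the data past the OS page cache
            elapsed = time.perf_counter() - start
            speeds.append(chunk_mib / elapsed)
    return speeds
```

Plotting the returned list over time is what produces the characteristic cliff from pSLC-cache speed down to native-TLC speed.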

When possible, we also log the temperature of the drive via the S.M.A.R.T. data to see when (or if) thermal throttling kicks in and how it impacts performance. Bear in mind that results will vary based on the workload and ambient air temperature.
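One common way to grab that temperature on Linux is to poll smartmontools' JSON output and log the composite sensor. A sketch; the sample payload below is a hypothetical, heavily trimmed version of what `smartctl -a -j` reports for an NVMe drive:

```python
import json


def nvme_temperature_c(smartctl_json: str) -> int:
    """Extract the composite temperature (deg C) from smartctl -j output."""
    data = json.loads(smartctl_json)
    return data["temperature"]["current"]


# Hypothetical trimmed sample of `smartctl -a -j /dev/nvme0`:
sample = '{"temperature": {"current": 41}}'
reading = nvme_temperature_c(sample)
```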

Peaking at about 1.6 GBps, the P300 wrote a little over 24GB of data before the write speed degraded to an average of 430 MBps from then on out. Thanks to its relatively small SLC write cache, the P300 is capable of much more consistent write performance than the Team Group MP33 and its SMI SM2263XT controller. And, given just 30 seconds of idle time after writing is complete, the 24GB write cache is recovered and ready for more.

When moving files around without airflow in a 25C environment, the controller reported temperatures in the mid-60s, peaking at 68C after moving 400GB of data to the drive. Thus, Patriot's P300 won't need any sort of heatsink or airflow to aid in cooling in most use cases.

Power Consumption

We use the Quarch HD Programmable Power Module to gain a deeper understanding of power characteristics. Idle power consumption is a very important aspect to consider, especially if you’re looking for a new drive for your laptop. Some SSDs can consume watts of power at idle while better-suited ones sip just milliwatts. Average workload power consumption and max consumption are two other aspects of power consumption, but performance-per-watt is more important. A drive might consume more power during any given workload, but accomplishing a task faster allows the drive to drop into an idle state faster, which ultimately saves power.
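That last point is just arithmetic: energy is average power multiplied by task time, so a drive that draws more watts but finishes sooner can still consume less total energy. An illustrative sketch with made-up numbers:

```python
def task_energy_joules(avg_power_w: float, task_seconds: float) -> float:
    """Total energy for one task: average power (W) times duration (s)."""
    return avg_power_w * task_seconds


# Hypothetical drives copying the same folder:
fast_drive = task_energy_joules(3.0, 40)  # 120 J: higher draw, finishes sooner
slow_drive = task_energy_joules(2.0, 90)  # 180 J: lower draw, runs longer
```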

Overall, Patriot's P300 is fairly efficient, nearly matching the SX8200 in performance per watt. It consumed the least power of all the SSDs in our test pool, sipping just over 2.2W and peaking at 3.3W under concurrent small- and large-block sequential reading/writing.

The Patriot drive also supports the APST, ASPM, and L1.2 power saving modes. On our desktop testbench, the SSD couldn't hit its lowest idle state, but fell to a respectable 40mW when ASPM was enabled. With ASPM disabled, or when active, the P300 consumes about 10x that amount, which is still lower than the rest of the pool.

Due at least in part to the global shutdown caused by the coronavirus, SSD prices have gone up a bit over the past few months. This has made some of the cooler and faster-performing NVMe SSDs jump back up in cost per GB, leading some would-be purchasers who still want a bump in speed over SATA to consider cheaper alternatives. And while speed-craving enthusiasts might not bite, entry-level NVMe SSDs are typically a great choice when the price is similar to their SATA competitors.

In day-to-day use, while the performance difference is usually quite small, NVMe SSDs usually offer an ever-so-slightly more responsive system than their SATA counterparts. This makes them the best choice for installing your operating system. Similarly, they may very well complement your main M.2 drive if you are just looking for a larger-capacity storage device to go along with a faster drive. Offering much-improved performance over a SATA SSD in many situations, Patriot's P300 looks to be a good fit here. Just know that the drive has limitations, primarily due to its lack of DRAM.

Overall, Patriot's P300 is versatile and efficient in day-to-day use. Coming in a thin, single-sided form factor, it's ready for almost any mobile device and will sip power compared to most SSDs, let alone a hard drive. This also leads to less heat output. And, without any cables, it won't add clutter to your desktop build like a 2.5-inch SATA SSD will.

In our testing, Patriot's P300 displays strong large-block sequential read performance, but lags in small file reading and requires higher queue depths (multiple transfers at once) to hit the same IOPS as competitors. This leaves it lagging behind the Team Group MP33 and DRAM-based NVMe competitors in light, low-QD consumer use cases, as we saw in PCMark 10. However, with a more consistent write cache design, it pulls ahead of the SMI solution when taxed with writes.

It's priced fairly low, but lacks the value-adds other brands give you, such as a software suite to manage and monitor the device. It isn't as fast as WD's Blue SN550 or Intel's and Crucial's QLC drives in many real-world applications, either. Some alternatives come with longer 5-year warranties, and WD's Blue SN550 features a higher endurance rating, too.

If you are looking for a new game drive on a tight budget, while Patriot’s P300 is significantly better than an HDD, it isn’t our first recommendation. The average gamer is probably better off with a SATA SSD at this price. And if you want to go NVMe, it’s worth paying $20 or so extra on a model with DRAM for improved performance and responsiveness.

This overclocked GeForce RTX 2070 Super is on sale for $460 right now

Gigabyte's factory overclocked GeForce RTX 2070 Super Windforce OC 3X graphics card is on sale again, with this latest discount matching the lowest price we've ever seen for one of these cards. It's available on eBay (via Newegg) for $459.99.

That’s $40 below its regular list price. Not too shabby for what we consider the best graphics card for anyone who wants to do a bit of 4K gaming for a reasonable amount of money (see our review). You won’t always hit 60fps at ultra settings (depending on the game), but it just doesn’t make sense to splurge on something like a GeForce RTX 2080 Ti with Ampere coming out later this year.

You could argue that now’s a bad time to buy a graphics card in general, and to some extent, that’s true (AMD will be launching a new GPU this year as well). However, if you’re in need of an upgrade right now and can catch one on sale, as is the case here, there’s nothing wrong with pulling the trigger. Playing actual PC games is infinitely more fun than playing the waiting game, after all.

The Windforce card that’s on sale here comes with a slightly faster boost clock compared to stock models (1,785MHz vs 1,770MHz). Just as importantly, it boasts a three-fan cooling solution. Your mileage will vary, but it’s reasonable to expect a bit of additional overclocking headroom.

Even without it, this is a fast card. It’s also the most affordable 2070 Super around. Stock has dwindled, with the next cheapest option on Newegg being an MSI card priced at $509.99, followed by marketplace sellers asking north of $700 for other 2070 Super models. Take a hard pass on those.

GeiL Unveils 64 GB DDR4-3200 SO-DIMM Kit, 2 x 32 GB

Memory manufacturer Golden Emperor International Limited, known generally as GeiL, has announced a new memory line with support for Intel 10th Gen and AMD Ryzen 4000 series processors. Casually calling its new 64 GB product simply 'SO-DIMM', the company will offer it in a 2 x 32 GB DDR4-3200 kit.

With mobile processor architectures improving at a steady rate, memory capacity limits have increased, giving gamers and content creators more robust memory capabilities. Designed to support Intel's 10th generation and AMD's Ryzen 4000 mobile series, the GeiL SO-DIMM lets users upgrade their notebooks with 64 GB of memory. Beyond DDR4-3200, the GeiL SO-DIMM series also includes DDR4-2666 and DDR4-2933 variants.

With its black, simplistic design, the DDR4-3200 64 GB (2 x 32 GB) kit has CAS latencies of 22-22-22-52. GeiL states every kit is tested with its DYNA 4 SLT technology, but doesn't state publicly what that entails. The DDR4-2666 kit has latencies of 19-19-19-43, while the DDR4-2933 has latencies of 21-21-21-48 (these sub-timings are similar to other vendors'). Every 2 x 32 GB kit has an operating voltage of 1.2 V and is backed by GeiL's limited lifetime warranty.

While the product naming scheme is somewhat unimaginative, the SO-DIMM range of dual-channel 64 GB kits for mobile platforms is GeiL's first of its kind. GeiL hasn't announced when the SO-DIMM 64 GB kits will be available, nor has it revealed pricing at this time.

The ASRock Z490 Aqua: Thunderbolt 3, PCIe 4.0 Ready, Water Cooled

Following a resurgence of motherboards decked out with custom monoblocks, ASRock has announced details of its Z490 Aqua. It features a familiar aluminium-plated copper custom monoblock, with just 999 units available for purchase. The board includes two Thunderbolt 3 Type-C ports, three PCIe 3.0 x4 M.2 slots, and an Aquantia 10 G and Realtek 2.5 G Ethernet controller pairing, as well as hardline water cooling fittings in the accessories bundle.

During Computex 2019, ASRock unveiled its X570 Aqua motherboard, which we have since reviewed and which remains one of the most stunning desktop models in existence. The new ASRock Z490 Aqua builds upon the success of its AMD-based sibling, with a full-cover monoblock that cools the CPU and the 16-phase power delivery. Following the same design as the X570 model, the ASRock Z490 Aqua adds an OLED display that can show CPU voltages, temperatures, and system and POST statuses, with an abundance of premium controllers and features onboard.

Included in the long list of features are three full-length PCIe 3.0 slots which run at x16, x8/x8, and x8/x8/x4, plus two PCIe 3.0 x1 slots. The board uses a 16-phase power delivery kept cool by the large monoblock, and benefits from 90 A power stages with a maximum Vcore current of 1260 A. Providing power to the CPU is a pair of 8-pin 12V ATX CPU power inputs, with an Intersil ISL69269 PWM controller operating in a 14+2 configuration. There is support for DDR4-4700, with a total capacity of up to 128 GB across four memory slots. It does support ECC DDR4 UDIMMs, but these will operate in non-ECC mode.
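That headline current figure follows directly from the phase arithmetic: of the 14+2 configuration, the fourteen 90 A stages feed Vcore. A quick check:

```python
vcore_phases = 14        # CPU-core phases of the 14+2 layout
stage_rating_a = 90      # each power stage is rated for 90 A
max_vcore_current = vcore_phases * stage_rating_a  # 1,260 A
```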

New for the Z490 Aqua is a pack of hardline water cooling fittings included in the board's accessories bundle. This comprises six knurled silver straight fittings, four 90-degree right-angled fittings, a fill-and-drain valve, and two stop-cap fittings bearing the Aqua logo, all packed into a nice presentation box.

The rear panel of the ASRock Z490 Aqua is stacked with two Thunderbolt 3 Type-C ports and two accompanying Mini DisplayPort input ports. Also present are three USB 3.2 G2 10 Gbps Type-A, one USB 3.2 G2 10 Gbps Type-C, and four USB 3.2 G1 Type-A ports. For users intending to leverage Intel's integrated graphics, there is a single HDMI video output. For users demanding premium networking, the Z490 Aqua uses an Aquantia AQC107 10 G and Realtek RTL8125BG 2.5 G Ethernet controller pairing, as well as an Intel Wi-Fi 6 wireless interface which provides support for BT 5.0 devices. There are five 3.5 mm color-coded audio jacks and an S/PDIF optical output, powered by a Realtek ALC1220 HD audio codec and assisted by an ESS Sabre 9218 DAC that drives the front panel audio. Finishing off the rear panel is a clear CMOS switch and a BIOS Flashback button.

The ASRock Z490 Aqua is expected to weigh a considerable amount when all the aluminium, the backplate, and the custom monoblock are factored in; the ASRock X570 Aqua, for reference, weighed a whopping 2.5 kg (~5.5 lbs). Only 999 units will be available for purchase, which adds an element of exclusivity that we expect to be reflected in the price tag. ASRock has also included redrivers and a PCIe 4.0 clock generator designed for Intel's Rocket Lake when it is ready to launch. While this doesn't benefit Intel's Comet Lake processors, it should leave the board ready for PCIe 4.0 once Rocket Lake arrives.

ASRock hasn’t unveiled pricing at this time, but we expect the Z490 Aqua to be available either at the launch of Intel’s 10th Generation Comet Lake processors or shortly after.

Intel 10th Gen Comet Lake-S Desktop CPU Final Specifications & Prices Leak Out – Core i9-10900K 10 Core For $488 US, Core i7-10700K 8 Core For $374 US, Core i5 6 Core Starting at $150 US

The final specifications and prices of Intel's upcoming 10th Generation Comet Lake-S desktop CPU family have been leaked by Videocardz. The information confirms details that we already knew, but the most important part of this leak is the pricing, which shows that Intel has definitely given its Core lineup a major overhaul in terms of price and positioning to tackle AMD's Ryzen 3000 processors.

Intel’s 10th Generation Desktop CPU Family Specifications & Prices Leak Out – Core i9-10900K With 10 Cores For $488 US, i7-10700K With 8 Cores For $374 US

Intel’s 10th Gen Comet Lake-S Desktop CPUs and the accompanying Z490 platform are expected to be unveiled on 30th April and should arrive on retail shelves on the 20th of May.

Update: HD-Tecnologia has also leaked the latest slides and shown us pictures of Intel’s latest box packaging for its 10th Generation Desktop CPUs. While the dodecahedron packaging for the 9th Gen Core i9 desktop CPUs looked great, I think the simpler box-shaped package with its nice mirror-cut design looks even better.

The Intel Comet Lake-S or the 10th Generation Core Family is expected to be the last CPU lineup to reuse the 14nm Skylake architecture. The Skylake architecture has been with us since 2015 and Intel has yet to replace it for desktop consumers. The architecture has seen several optimizations and key refinements that have taken the lineup from 4 cores and 8 threads to 10 cores and 20 threads. The same 14nm process has also been improved to the point that flagship CPU speeds have seen a massive jump from 4.20 GHz to 5.30 GHz boost clocks.

There are three SKUs in the unlocked lineup, which is one less than what we were told. There’s the flagship Core i9-10900K, followed by the Core i7-10700K and the Core i5-10600K. The Core i3 variant is missing, but it’s a crucial SKU that would be targeting a very competitive market, and it would be a bad decision for Intel not to launch an unlocked quad-core part in the budget tier segment.

Intel Core i9-10900K – 10 Cores, Up To 5.3 GHz Single-Core, 4.9 GHz All-Core at $488 US

The Intel Core i9-10900K will be the flagship part of the 10th Generation Desktop CPU family, and Intel has a few tricks up its sleeve to offer even better performance than the Core i9-9900KS. The i9-10900K features 10 cores, 20 threads, a total cache of 20 MB, and a 125W TDP. The chip has a base frequency of 3.7 GHz and a boost frequency of 5.1 GHz. However, using Intel’s Turbo Boost Max 3.0 technology, the chip can boost up to 5.2 GHz on a single core, and the all-core turbo reaches an impressive 4.8 GHz. Some of the features of this particular chip include:

Up to 4.8 GHz All-Core Turbo

Up to 5.3 / 4.9 GHz Thermal Velocity Boost Single / All-Core Turbo

Up to 5.2 GHz Intel Turbo Boost Max 3.0

Up to 10C and 20T

Up to DDR4-2933 MHz dual-channel

Enhanced Core & Memory Overclocking

Active Core Group Tuning

Here’s the interesting part: the chip would also get Thermal Velocity Boost, similar to the current flagship parts. CPUs that support this algorithm, like the Core i9-10900K, would feature even faster boost frequencies of 5.3 GHz (single-core) and 4.9 GHz (all-core). However, as the name suggests, only top-tier cooling solutions would allow full utilization of the Thermal Velocity Boost feature. So unless you rock a high-end AIO liquid cooler or a custom loop setup, don’t expect a sustained velocity boost but rather short bursts until the thermal threshold is hit. It will be interesting to learn the full extent of what this feature has to offer and what kind of cooling the Core i9-10900K requires in general. A few benchmarks of the Core i9-10900K versus the AMD Ryzen 9 3900X CPU can be found here.
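The boost behavior described above can be sketched as a toy model. To be clear, the actual algorithm lives in Intel’s firmware and is not public; the 70 °C trigger temperature below is an illustrative assumption, while the clock values come straight from the leaked spec sheet:

```python
# Toy model of Core i9-10900K boost selection (illustrative only).
# Clock figures are from the leaked specs; the 70 C threshold is an
# assumption standing in for Intel's real Thermal Velocity Boost trigger.
TURBO = {"single": 5.1, "tbm3": 5.2, "all_core": 4.8}  # GHz, standard boost
TVB = {"single": 5.3, "all_core": 4.9}                 # GHz, velocity boost

def boost_clock(active_cores: int, package_temp_c: float) -> float:
    """Return the expected boost clock in GHz for a given load and temperature."""
    cool_enough = package_temp_c <= 70  # assumed TVB temperature threshold
    if active_cores == 1:
        if cool_enough:
            return TVB["single"]   # 5.3 GHz with top-tier cooling
        return TURBO["tbm3"]       # 5.2 GHz via Turbo Boost Max 3.0
    if cool_enough:
        return TVB["all_core"]     # 4.9 GHz all-core velocity boost
    return TURBO["all_core"]       # 4.8 GHz standard all-core turbo

print(boost_clock(1, 65))   # cool single-core load -> 5.3
print(boost_clock(10, 85))  # hot all-core load -> 4.8
```

The point the model makes is the one in the text: once the package warms past the threshold, the chip falls back from the velocity-boost clocks to the standard turbo bins, which is why those 5.3 GHz figures come in short bursts rather than sustained runs.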

Intel Core i7-10700K – 8 Cores, Up To 5.1 GHz Single-Core, 4.7 GHz All-Core at $374 US

The Intel Core i7-10700K would feature 8 cores and 16 threads. The chip would house 16 MB of total cache and carry a TDP of 125W, with a base clock of 3.8 GHz, a boost clock of 5.0 GHz (single-core), and 5.1 GHz (single-core) with Turbo Boost Max 3.0. That makes the chip 100 MHz faster in single-core boost than the Core i9-9900K, which retails for over $500 US, while matching its 4.7 GHz all-core boost. Since this is an i7 part, expect pricing to fall around $350-$400 US.

Intel Core i5-10600K – 6 Cores, Up To 4.8 GHz Single-Core, 4.5 GHz All-Core at $262 US

The Intel Core i5-10600K would feature 6 cores and 12 threads. The chip would house 12 MB of total cache and a TDP of 125W, with a base clock of 4.1 GHz, a boost clock of 4.8 GHz (single-core), and 4.5 GHz (all-core). The chip would be faster than the 8th Gen flagship, the Core i7-8700K, featuring higher base and boost clocks in both single- and all-core workloads. The Core i5 should retail in the $220-$270 US segment, which is a decent price for a fast 6-core, multi-threaded chip.
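To put the leaked pricing of the three unlocked SKUs into perspective, the cost per core works out as straightforward back-of-the-envelope arithmetic (prices and core counts taken from the leak covered above):

```python
# Price-per-core for the leaked unlocked Comet Lake-S SKUs.
# Figures (USD price, core count) are from the Videocardz leak.
skus = {
    "Core i9-10900K": (488, 10),
    "Core i7-10700K": (374, 8),
    "Core i5-10600K": (262, 6),
}

for name, (price_usd, cores) in skus.items():
    print(f"{name}: ${price_usd / cores:.2f} per core")
# Core i9-10900K: $48.80 per core
# Core i7-10700K: $46.75 per core
# Core i5-10600K: $43.67 per core
```

Interestingly, the per-core cost stays within a few dollars across the whole unlocked stack, which underlines how aggressively the 10-core flagship is positioned against its own siblings, let alone against Ryzen 3000.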

Intel 10th Gen Comet Lake-S 65W Desktop CPU Family

The rest of the lineup is made up of the 65W SKUs, which come in Core i9, Core i7, Core i5, and Core i3 flavors. A 65W Core i9-10900 with 10 cores and 20 threads that still boosts up to 4.5 GHz across all cores (4.6 GHz if you include Thermal Velocity Boost) is pretty good, and 5.2 GHz on a single core doesn’t sound bad at all, considering this is a 65W chip (at its base frequency).

The Intel Core i3 lineup is also worth pointing out, as it is made up of 3 SKUs which seem to be part of the initial launch family. The Core i3-10320 would lead with 4 cores and 8 threads at 65W. 8 MB of cache and clocks of up to 4.6 GHz single-core and 4.4 GHz all-core sound decent enough for what is supposed to be a budget chip retailing under $150 US. The full list of SKUs along with their specs can be seen in the table below.

400-Series Platform and LGA 1200 Socket Support

The Comet Lake-S family would also move to a new socket known as LGA 1200. While the LGA 1200 socket has the same dimensions as the LGA 1151 socket (37.5mm x 37.5mm), the socket keying has shifted to the left side and Comet Lake is no longer electrically or mechanically compatible with Coffee Lake motherboards. Some details of the new LGA 1200 package and socket for Comet Lake:

Comet Lake will transition to a higher pin-count package

Comet Lake LGA will not have backward compatibility with legacy platforms

No changes to ILM dimensions or thermal solution retention

Comet Lake LGA improves power delivery and support for future incremental I/O features

Pin 1 orientation remains the same, but socket keying has shifted left

The good thing is that your existing coolers will still be compatible with the LGA 1200 socket, so that’s one hardware change you shouldn’t worry about. The Comet Lake-S family will retain support for DDR4-2666 UDIMM memory and support up to 32 GB capacity DIMMs per channel.

Intel plans to have several chipsets deployed in the 400-series family. There would obviously be Z490, which will target the unlocked ‘K’ SKUs I mentioned above, but aside from that, we are looking at the W480 (Entry Workstation), Q470 (Corporate with Intel vPro), and H410 (Value) chipsets. These would target more corporate and entry-tier users. Also interesting to note is that H410 is not pin-compatible with the W480 and Q470 chipsets, which points to a heavily cut-down design for the entry-level chip.

Following are some of the main platform features of the 10th Generation Comet Lake-S family:

Up To 10 processor cores for enhanced performance

Up To 30 PCH-H High-Speed I/O lanes for port flexibility

Up To 40 PCIe 3.0 Lanes (16 CPU, up to 24 PCH)

Media & Display features for premium 4K content support

Integrated + Discrete Intel Wireless-AC (Wi-Fi/BT CNVi) Support

Intel Wi-Fi 6 (Gig+) Support

Enhanced Core and memory overclock

Integrated USB 3.2 Gen 2×1 (10 Gb/s) support

Intel Rapid Storage Technology (Intel RST)

Programmable (Open FW SDK) Quad-Core Audio DSP

C10 & S0ix Support for Modern Standby

While there would be a range of new overclocking features, Intel has also revealed that it will be thinning the die of 10th Gen Desktop CPUs, leaving more room for the IHS to dissipate heat. Intel is calling this the Thin Die STIM design, and it is stated to improve overall CPU thermal performance.

In terms of chipset features, W480 would be the most feature-rich of the three chipsets that are mentioned here. Z490 would be the most appealing for the enthusiast and gaming audience, but let’s take a look at the mainstream chipsets. The W480 chipset would offer a total of 46 high-speed IO lanes and a total of 40 PCIe Gen 3.0 lanes. The CPUs would retain 16 lanes with the chipset offering up to 24 PCIe 3.0 lanes.

There would be support for up to 8 SATA III ports, 8 USB 3.2 Gen 2 ports, 10 USB 3.2 Gen 1 ports, and 14 USB 2.0 ports, plus Intel RST. None of these three mainstream chipsets would feature overclocking support, since that is restricted to the Z490 chipset, but we will get more information on overclocking later on from Intel themselves.

Intel has definitely adjusted the prices across its lineup. The Core i9-10900K with 10 cores actually comes in below the $500 US+ Core i9-9900KS, which is great news for consumers who were waiting to buy one. The rest of the prices also look attractive, and the only thing left to see is how these chips fare against AMD’s Ryzen 3000 in actual benchmark tests. We will find out what kind of value proposition the Intel 10th Gen Core lineup holds against the AMD Zen 2 based family on the 20th of May.