Intel NUCs Hit With Five New Security Flaws

Intel issued yet another security advisory this week, this time saying that its NUC mini PCs are vulnerable to escalation of privilege attacks. The company also released firmware patches for the mini PCs in order to mitigate the potential attacks. Researchers have found multiple vulnerabilities in NUC PCs this year.

NUC owners are strongly advised to download the latest firmware update for their model, available from Intel’s website.

According to Intel’s advisory, the five vulnerabilities could allow attackers to escalate privileges on a NUC device. Two of them received a Common Vulnerability Scoring System (CVSS) base score of 7.8, and the other three scored 7.5; all five rank as high-severity vulnerabilities.
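For reference, the CVSS v3.x specification defines fixed qualitative bands for base scores (7.0 to 8.9 is "High"; 9.0 and up is "Critical"). Here is a small illustrative helper, not part of Intel's advisory, showing where the five scores land:

```python
# CVSS v3.x qualitative severity bands, per the FIRST specification:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# The five NUC advisory scores all fall in the "High" band.
scores = [7.8, 7.8, 7.5, 7.5, 7.5]
print({s: cvss_severity(s) for s in sorted(set(scores))})  # {7.5: 'High', 7.8: 'High'}
```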

The first flaw (CVE-2019-14608) is due to improper buffer restrictions in the NUC firmware, which could allow attackers to enable privilege escalation via local access to the device. 

The second vulnerability (CVE-2019-14610) describes improper access control in NUC firmware that could allow an authenticated user to enable escalation of privilege via local access.

The third vulnerability (CVE-2019-14609) stems from improper input validation in the firmware, which also leads to privilege escalation via local access.

A fourth NUC firmware flaw (CVE-2019-14611) was an integer overflow that could result in the same type of attack.

The final flaw (CVE-2019-14612) is an out-of-bounds write in NUC firmware that attackers could also exploit to escalate system privileges via local access.

Besides all of the speculative execution attacks against its processors, Intel has also had to issue multiple security advisories for its NUC family of devices this year. The company has been attempting to prioritize security since the Spectre CPU vulnerabilities were revealed, and, in part, that means encouraging researchers to look for vulnerabilities on its platforms. 

Intel didn’t have to ask twice because the vulnerability disclosures seem to keep on coming. It remains to be seen if Intel’s attempt to rid its products of security flaws will result in fewer bugs as the years go by, or if we’ll see an increase in bug disclosures as more researchers investigate Intel’s products.

Intel’s Manufacturing Roadmap from 2019 to 2029: Back Porting, 7nm, 5nm, 3nm, 2nm, and 1.4 nm

After some emailing back and forth, we can confirm that the slide Intel’s partner ASML presented at the IEDM conference is an altered version of a slide Intel originally showed in September 2019. ASML added animations so that the bottom row of dates corresponds to specific nodes; at the time we didn’t spot these animations (and, it seems, neither did the rest of the press). The correlation ASML drew to exact node names isn’t much of a stretch to piece together, but we have been asked to include the original Intel slide to provide context for what Intel is saying compared to what ASML presented. Some of the wording in the article has changed to reflect this. Our analysis is still relevant.

One of the interesting disclosures here at the IEEE International Electron Devices Meeting (IEDM) has been around new and upcoming process node technologies. Almost every session so far this week has covered 7nm, 5nm, and 3nm processes (as the industry calls them). What we didn’t expect to see disclosed was an extended roadmap of Intel’s upcoming manufacturing processes. It should be noted that the slide presented at the conference by Intel’s partner, ASML, was modified slightly from its original source.

They say a slide is worth 1000 words. Here’s 1000 words on Intel’s future.

This is Intel’s original slide, which doesn’t detail which nodes land in which years. However, it should be easy enough to figure out that each element in the bottom row is the next process node along; otherwise the +/++ entries wouldn’t make sense.

ASML applied these assumptions to the slide it presented at the IEDM keynote, but it did not disclose that the slide had been modified.

So let’s go through some key areas.

1.4nm in 2029

Intel expects to be on a two-year cadence with its manufacturing process node technology, starting with 10nm in 2019, moving to 7nm EUV in 2021, and then introducing a fundamentally new node in each of 2023, 2025, 2027, and 2029. This final node is what ASML has dubbed ‘1.4nm’. This is the first mention of 1.4nm in the context of Intel on any Intel-related slide. For context, if that 1.4nm figure is indicative of any actual feature, it would be the equivalent of about 12 silicon atoms across.
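A quick back-of-the-envelope check on that atom count. The calculation below assumes a silicon covalent radius of roughly 0.117 nm as the per-atom spacing, which is a journalistic simplification rather than rigorous crystallography; what ‘1.4nm’ actually measures on a real process is anyone’s guess:

```python
# Sanity check of the "12 silicon atoms across" claim.
# Assumption: silicon covalent radius of ~0.117 nm used as per-atom spacing.
si_radius_nm = 0.117
feature_nm = 1.4
atoms_across = feature_nm / si_radius_nm
print(f"{atoms_across:.1f} atoms")  # 12.0 atoms
```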

It is perhaps worth noting that some of the talks at this year’s IEDM feature dimensions on the order of 0.3nm using what are called ‘2D self-assembly’ materials, so something this small isn’t unheard of, but it is unheard of in silicon. Obviously there are many issues in going that small that Intel (and its partners) will have to overcome.

+, ++, and Back Porting

In between each process node, as Intel has stated before, there will be iterative + and ++ versions in order to extract more performance from each node. The only exception is 10nm, which is already on 10+, so we will see 10++ and 10+++ in 2020 and 2021 respectively. Intel believes it can do this on a yearly cadence, and it has overlapping teams to ensure that work on one full process node can overlap with the next.

The interesting element in these slides is the mention of back porting. This is the ability for a chip designed with one process node in mind to be remade, perhaps due to delays, on an older ‘++’ version of a node in the same timeframe. Despite Intel stating that it is disaggregating chip design from process node technology, at some point there has to be a commitment to a specific node in order to start the silicon layout. At that point the process node is more or less locked in, especially once it goes to mask creation.

The slide shows that Intel is going to allow a workflow in which any first-generation 7nm design could be back ported to 10+++, any first-generation 5nm design could be back ported to 7++, and so on. One could argue that this roadmap might not be so strict with the dates: we have seen Intel’s 10nm take a long time to bake, so expecting the company to keep a yearly cadence on + updates alongside a two-year cadence on main process nodes appears very optimistic and aggressive.

Note that this isn’t the first mention of back porting hardware designs when it comes to Intel. With the current delays to its 10nm process technology, it has been widely rumored that some of Intel’s future CPU microarchitectures, originally designed with 10nm (or 10+, 10++) in mind, might actually find a home on the 14nm process due to the success of that node.

Development and Research

Normally with process node developments, there will be different teams working on each node. This slide states that Intel is currently developing its 10+++ optimizations as well as the 7nm family. The idea is that the ‘+’ updates capture the low-hanging fruit from a design standpoint every generation, while the number represents a full-node benefit. Interestingly, we see Intel’s 7nm being based on 10++, whereas in the future Intel sees 5nm coming from the base 7nm design, and 3nm coming from 5nm. There is no doubt that some of the optimizations that enter each +/++ update will filter into future designs as and when they are needed.

In this slide, we have Intel’s 2023 node currently in the definition stage. At this IEDM conference there’s a lot of talk about 5nm in this timeframe, so some of those improvements (in manufacturing, materials, consistency, etc.) will ultimately end up in Intel’s process depending on which partners it works with (historically Applied Materials). It is worth noting that 5nm is listed as a 2023 node, which is around the time ASML will start selling its ‘High NA’ EUV machines to help with better pattern definition during manufacturing. I’m not sure if High NA will intercept at 5nm or 3nm, assuming this Intel roadmap has its dates correct and Intel is able to stick to it, but it is something to consider.

Beyond 2023, Intel is currently in the ‘path-finding’ and ‘research’ mode. As always when looking this far out, Intel is considering new materials, new transistor designs, and such. At this IEDM conference we’re seeing a lot of talk of gate-all-around transistors, either as nano-sheets or nano-wires, so no doubt we’re going to see some of that as FinFET runs out of steam. TSMC is still using FinFETs for its 5nm process (Intel’s 7nm equivalent), so I wouldn’t be surprised if we see something like nano-sheets then nano-wires (or even hybrid designs) come into Intel’s manufacturing stack.

It’s worth also pointing out, based on the title of this slide, that Intel still believes in Moore’s Law. Just don’t ask how much it’ll cost.

Are the Radeon RX 5700 and 5700 XT ideal GPUs for your gaming PC?

In the battle of graphics cards, AMD and NVIDIA have been duking it out for space in your custom-built gaming tower. With the release of the Radeon RX 5700 and 5700 XT, AMD threw down two solid mid-range offerings showcasing its Navi architecture (now known as Radeon DNA or RDNA). Both 7-nanometer cards feature a minimalist design, a blower-style cooler (using a single fan to pull in air) and 8GB of GDDR6 RAM. Senior Editor Devindra Hardawar gave them scores of 85 and 86 respectively, dinging both slightly for their high temperatures and lack of real-time ray tracing.

If you opted to use one of these GPUs in a recent build, we’re interested in hearing what you think: Did you also notice the blowers running hot? Were you able to squeeze some 4K performance out of the 5700 XT? Were there enough inputs for you? Did you feel like you missed out because there’s no real-time ray tracing? We want to hear what worked for you (and what didn’t) in a user review on our Radeon RX 5700 and 5700 XT product pages. After all, no one knows a product better than the people who own it, right? (Especially gamers who’ve put it through its paces on their favorite graphics-intense PC titles.) Keep your fellow readers well informed, and remember your review could be included in an upcoming user review roundup article!

Note: Comments are off for this post, however, we’d love to hear your thoughts and opinions on our Radeon RX 5700XT or RX 5700 product pages!

Qualcomm Snapdragon 865: Top 4 best things

Like clockwork, Qualcomm is updating its flagship chip, the one that most high-end phones will use in 2020. It’s called the Snapdragon 865, and it’s got a lot going for it, including a renewed focus on graphical performance and an enormous boost in AI processing capabilities that could (should?) make it much easier for developers to tap into applications that rely on machine learning and neural networks.

Aside from the big-ticket items, there are a bunch of smaller features that you should know about the Snapdragon 865 — and some decisions that may leave you scratching your head.

But first, a disclaimer

Here’s the thing about the launch of a new SoC: it’s a bunch of numbers, many meaningful and a few less so, but it’s all theoretical until the chip is integrated into a product. For example, last year Qualcomm touted its new ultrasonic fingerprint sensor technology, saying it was undoubtedly a better, more secure option for under-screen biometrics. It turns out that only one manufacturer, Samsung, thought so, and that decision plagued the company throughout 2019, with poor accuracy and performance and a major security scare that forced Samsung to rush an emergency patch to all of its S10 and Note 10 devices.

So when we say that the Snapdragon 865 theoretically supports 8K video capture, that doesn’t guarantee something like the Galaxy S11 or the Pixel 5 (hah!) will actually include that feature — other aspects of the phone, like the memory bus, will need to have corresponding bandwidth that Qualcomm can’t control. Think of the Snapdragon 865 and other flagship SoCs as blueprints from which the phone manufacturers build their houses. The end results rarely look the same upon completion.

The Snapdragon 865 is all about gaming

We’ll start with something particularly meaningful: gaming. While not everyone who uses a high-end phone is a gamer, it’s increasingly clear with games like PUBG Mobile, Fortnite, and Call of Duty Mobile that phones are a new vector for competitive gameplay, and as a result, companies like Qualcomm have to cater to those audiences. We’re already seeing “G” series variants of Qualcomm’s mid-range chips — the new Snapdragon 765, for instance, has a separate “gaming” variant with a boosted GPU — but the Snapdragon 865 incorporates a bunch of new things.

As part of Qualcomm’s Snapdragon Elite Gaming concept, the 865 pushes for more “desktop-like” rendering, focusing on dynamic lighting, which hasn’t really been an emphasis in current mobile titles. We’re not quite talking ray tracing here, but it’s a start.

Another very cool aspect of the new Adreno 650 GPU is something called Adreno Updateable GPU Drivers. Updateable drivers have been a constant in the PC gaming world for years; on mobile, this will allow Qualcomm to work with specific game developers to update the Adreno driver for a given game without having to wait on Google’s monthly security patches or major platform updates. You’ll be able to download drivers through the Google Play Store, which could mean regular patches that improve game performance or fix endemic bugs. Big win!

Then there’s sustained performance. Games use a lot of GPU power, and as the chip works, it heats up. In response, the GPU clock speeds ramp down to prevent overheating, but that also hurts performance. Qualcomm says the Adreno 650 is much better at sustaining performance for the entire length of a gaming session, both because the GPU is more efficient and because the chip ramps down other components to focus entirely on what’s happening on screen.

Finally, on the gaming side, the Adreno 650 will officially support 144Hz displays, a rumored feature of the Galaxy S11. While we’ve seen support for 90Hz and even 120Hz displays in the past year, official support for high refresh rates allows Qualcomm to, once again, work with game developers to ensure that games not only run great when they’re reaching 120+ fps, but also look great while doing it.

Of course, many of these advancements depend on game developers coming to the table: it takes a lot of additional work to add desktop-level lighting and high-refresh-rate support. But given that most of the world’s flagship phones run Qualcomm chips, the big studios will likely have no choice.

Look out for 8K, and say goodbye to your phone’s storage

4K isn’t even available on every phone yet (and definitely not 4K60), but Qualcomm is touting the Snapdragon 865’s ability to support 8K video capture at 30fps. That’s four times the number of pixels of 4K, which means a lot more storage and considerably more memory bandwidth needed to ensure smooth capture. We’re not even at the stage where every phone can capture 4K30 without dropping frames (ahem, Pixel 4), so I’m not entirely confident 8K will be fit for consumption when it’s theoretically available, but it’s a nice-to-have nonetheless.
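The "four times the pixels" claim checks out, and a rough calculation gives a feel for the raw data rate involved. The 12 bits per pixel figure below assumes 8-bit 4:2:0 subsampled video, a common capture format, so treat the bandwidth number as illustrative:

```python
# Pixel-count comparison behind the "four times 4K" claim.
uhd_8k = 7680 * 4320   # 33,177,600 pixels
uhd_4k = 3840 * 2160   #  8,294,400 pixels
ratio = uhd_8k / uhd_4k
print(ratio)  # 4.0

# Rough uncompressed data rate at 30fps, assuming 8-bit 4:2:0 (12 bits/pixel).
bits_per_pixel = 12
fps = 30
raw_gbps = uhd_8k * bits_per_pixel * fps / 1e9
print(f"{raw_gbps:.1f} Gbit/s uncompressed")  # 11.9 Gbit/s before encoding
```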

Potentially more interesting on the imaging side, thanks to the Spectra 480 image signal processor, is 4K video capture at 120fps, along with unlimited 960 frames-per-second slow motion, which would be a huge step up from the sub-second 960fps capture of today’s phones. Again, it’ll be constrained by the memory bus, but it’ll be supported on the chip side. Also cool: the ability to capture high-resolution photos while shooting 4K video, something that heretofore hasn’t been possible.

Qualcomm’s touting a ridiculous 200-megapixel photo limit, which would have sounded unreachable at the beginning of 2019, but we’re now seeing devices shipping with 108MP sensors, so anything’s seemingly possible at this point.

One of the more fascinating aspects of the new ISP is the work Qualcomm did with Dolby to offer direct-to-consumer Dolby Vision HDR capture. While Dolby Vision is increasingly touted as the HDR standard to beat in commercial productions, users will now be able to share that content directly to compatible TVs and monitors, as they could with HDR10 capture on the Snapdragon 855 earlier this year.

Finally — and this could be one of those subtle-but-important upgrades — is the new ISP’s ability to perform image smoothing and denoising without ruining the detail of a photo. For smaller manufacturers like OnePlus and Xiaomi that don’t want to build their own image pipeline, this could be the bridge they need to capture Pixel-quality photos in any lighting condition.

5G is included, whether you like it or not

While the Snapdragon 865 doesn’t have on-die 5G — which means it’s not directly integrated on the primary chip itself — an X55 5G modem solution will be included on every version of the chip, whether the phone will use it or not.

In other words, every Snapdragon 865 will be 5G-enabled, but in countries where there are no supported networks, that part of the modem will be disabled, reverting to LTE.

To combat accusations that because it’s not on-die, the X55 will make the Snapdragon 865 less efficient than an equivalent device with just LTE on-board, the company says that efficiency improvements made to other aspects of the chip, from faster TTS (time-to-sleep), lower-power envelope tracking and antenna power usage, will make up the difference. As a result, phones running the Snapdragon 865 should have the same or better battery life than those using its LTE-focused predecessor.

Given that 2020 is going to be the year of 5G, the X55 (which was announced back in February) will support a bunch of standards that should make its interoperability with LTE more seamless. In particular, Qualcomm is touting DSS, or dynamic spectrum sharing, as one of the ways 4G and 5G networks will co-exist nicely, as carriers can reuse the same spectrum for both standards and switch between them when circumstances are ideal for 5G, falling back to the more mature LTE when necessary.

Also embracing the future of phone calls, most of which will happen over Bluetooth, Qualcomm is introducing aptX Voice, a “super-wide-band” Bluetooth codec meant specifically for voice calls made over wireless headphones. Given that most Bluetooth headphones are trash for phone calls, this is a welcome bit of news.

AI performance is TOPS-notch

No one really understands how dedicated AI processors work; they just know that such processors make machine learning- and neural network-centered applications function properly.

Well, the Snapdragon 865 has an AI engine twice as powerful as the Snapdragon 855’s, for what it’s worth. One way that translates into real-world performance is real-time translation — at least, that’s how Qualcomm is pitching it. The Hexagon DSP can take in speech in one language and output another in real time, something Google Translate has been touting for years, but Qualcomm claims the Snapdragon 865 can do it locally, on-device, as long as the language database is present.

The numbers are pretty astounding, too: 15 TOPS, or 15 trillion operations per second. More important is that the AI engine has 3MB of dedicated L3 cache, so it doesn’t need to rely on the cache of the main CPU.

Perhaps more interesting is that the Snapdragon 865 has an always-on Sensing Hub, which means it can always be listening for hot words to multiple voice assistants — Google Assistant and Alexa, for instance — without impacting battery life.

The Kryo CPU portion hasn’t changed much

While we haven’t had a chance to benchmark the Snapdragon 865 just yet, we’re expecting modest gains in both single-core and multi-core performance from the Kryo 585 cores. Qualcomm claims the cores can perform up to 25% better in certain scenarios than the Kryo 485 cores in the S855, but it’s still unclear whether that will translate to real-world performance gains.

The chip is still built on a 7nm TSMC process, and it uses the same configuration of four low-power cores, three high-performance cores, and a Prime core with a maximum burst speed of 2.84GHz. The four faster cores have been upgraded to Arm’s new Cortex-A77 design, which is more efficient per clock, while the low-power cores remain based on the Cortex-A55.

In other words, this is going to be an impressive chip, but don’t expect it to catch up to the Apple A13 (or even the A12) in single- or multi-core performance. The chip is claimed to be 35% more efficient, however, which should translate to better battery life.

It’s good, but what choice do you have anyway?

Qualcomm is everywhere. There are a lot of substantive improvements to this year’s flagship SoC, but at the end of the day, it wouldn’t really matter if the company decided to make tiny, insignificant improvements instead. Qualcomm products are ubiquitous around the world, and power basically every phone released in North America that doesn’t come from Apple.

At the same time, Qualcomm is under considerable pressure to match the pace of innovation set by Apple’s A-series chips in recent years, and must maintain its lead over HiSilicon (and, to a lesser extent, the diminishing Exynos). While it has failed to compete on a pound-for-pound compute level, Qualcomm continues to outpace every other chipmaker in the breadth of its feature set.

Of course, this year is all about 5G, and there’s no lack of consternation around the fact that the Snapdragon 865, unlike its cheaper Snapdragon 765 counterpart, doesn’t have it integrated. That doesn’t mean 5G isn’t essential to the success of the chip, just that the new standard hasn’t quite reached the level of maturity that Qualcomm and most U.S. carriers would like us to think it has.

AMD may be spicing up the graphics card game with the Radeon RX 5500 XT

AMD may not keep us waiting much longer for the anticipated Radeon RX 5500, which the company unveiled back in October. The new graphics card, particularly the rumored Radeon RX 5500 XT, could be coming next week, according to VideoCardz.

There’s still little detail on the Radeon RX 5500 XT. We know the 5500-series graphics cards will have 22 compute units with 1408 stream processors, a bit more than half of those found on the RX 5700 XT. The new cards will use a 128-bit memory bus (half that of the 5700-series) and offer up to 8GB of GDDR6, according to an official slide from AMD, shared by Hot Hardware.

That leaves some room for guessing as to just what the RX 5500 and 5500 XT could do. The two cards would need to differentiate somehow, and if it’s not in stream processors or clock speeds, it could be in memory. VideoCardz believes the RX 5500 may come exclusively with 4GB of GDDR6 memory while the RX 5500 XT could offer 4GB and 8GB variants, similar to the way the Nvidia GeForce GTX 1060 has come in 3GB and 6GB variants.
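As a sanity check on those specs: RDNA allocates 64 stream processors per compute unit, so the disclosed CU count pins down the shader count, and the bus width bounds memory bandwidth. The 14 Gbps GDDR6 data rate below is an assumption (it is what the 5700 series ships with), not a confirmed RX 5500 spec:

```python
# RDNA: 64 stream processors (SPs) per compute unit (CU).
sp_per_cu = 64
rx5500_sp = 22 * sp_per_cu     # 1408, matching AMD's slide
rx5700xt_sp = 40 * sp_per_cu   # 2560
print(rx5500_sp, rx5500_sp / rx5700xt_sp)  # 1408 0.55 -- "a bit more than half"

# Peak memory bandwidth from bus width and per-pin data rate.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(128, 14.0))  # 224.0 GB/s for the RX 5500 (assumed 14 Gbps)
print(bandwidth_gbs(256, 14.0))  # 448.0 GB/s for the RX 5700 XT
```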

A contest of variety

AMD is playing a hard game against Intel on the processor side and Nvidia on the graphics processor side. This has been working for AMD in the processor fight, with Ryzen CPUs stealing market share from Intel, but Nvidia’s cards still offer the most power at the high end.

AMD has still managed to make progress against Nvidia’s market share, and these new Radeon RX 5500-series cards could help it snag even more of the budget market. Team Red will be positioned to challenge Nvidia’s GeForce GTX 1650 and 1660 models, including the Ti and Super versions. Given that Nvidia effectively has six graphics processors in the low-to-mid-range space, it makes sense for AMD to introduce more than just one version of the RX 5500.

Leaked benchmarks have shown the Radeon RX 5500 giving the GTX 1650 serious competition, and a higher-spec 5500 XT could be the card to run against the GTX 1660. If the new graphics cards come out this month, we may get to see how the competition heats up just in time for the next big shopping rush.

AWS Designing a 32-Core Arm Neoverse N1 CPU for Cloud Servers

Amazon Web Services’ CPU design unit is working on a new multi-core processor for AWS servers. The new CPU is said to use Arm’s new Neoverse N1 architecture and would feature a considerably higher core count than AWS’s first-generation Graviton processor, which should result in a significant performance increase.

The yet-to-be-named AWS CPU will be based on Arm’s Neoverse N1 microarchitecture and will integrate as many as 32 cores, according to Reuters, which cites two sources with knowledge of the matter. The chip will also be able to connect to various special-purpose accelerators using a ‘fabric’ interface to greatly speed up certain workloads.

At a high level, the Neoverse N1 (aka Ares) largely resembles Arm’s consumer-oriented Cortex-A76 microarchitecture: a 4-wide fetch/decode machine with a pipeline depth of only 11 stages that can shrink to 9 when needed. Meanwhile, the Neoverse N1 is designed to run at relatively high frequencies to provide maximum single-thread performance. It has a different cache architecture (coherent, with a 1 MB L2 option, though caches are technically not part of the microarchitecture per se) and some other enhancements. Overall, with the Neoverse N1, Arm is looking at clocks of up to 3.1 GHz and a ~100 W TDP per SoC.

Readers interested in finding out more about Arm’s Neoverse N1 platform can read our coverage from earlier this year, but the key point in the context of the newly released information is that AWS continues to believe in custom Arm-based processors for servers and would be among the first adopters of the Neoverse N1. As noted above, the microarchitecture and the platform were optimized for cloud server workloads from the ground up, so with further customization from Amazon, the 32-core processor promises to offer rather serious performance in the applications it was designed for. Will these CPUs challenge AMD’s Rome or Intel’s Cascade Lake? Probably not, but the importance of custom chips lies in offering the right total cost of ownership and sufficient performance, not winning all the benchmarks.

Weird MCM Based Intel Xeon SoMA Processors Spotted Online, House 4 Dies Per CPU

Okay, time for something spicy that has just come down the grapevine. One of our favorite Twitter accounts, Momomo_US, posted a weird-looking processor hosting 4 MCM dies. The IHS of the processor looks distinctly like Intel’s, so I decided to do a bit of digging around, and boy did I find a lot of trails. Before I get on with all the gory details, I just wanted to remind you to mind that rumor tag and keep a jar of salt handy. Intel has never admitted to pursuing an MCM-based strategy, but if this CPU turns out to be anything other than a very large scale hoax (unlikely), it would indicate that the company has, at the very least, tested MCM designs in its labs.

Intel Xeon SoMA processors with 4 dies spotted on Chinese forums and on eBay

Okay, before you get all excited, I will preface this story by saying that this is almost certainly a very old part – and not a leak of a new and upcoming SKU – so even in the very best case scenario it simply proves that Intel dabbled in MCM designs. In fact, the packaging puts it at an LGA 1156-ish design, and the batch number indicates this CPU was made almost 4 years ago. The package was made in Malaysia.

Based on all this info, you are looking at one of three things:

A very old and abandoned project for MCM designs (likely),

A project that Intel has not revealed yet (anyone’s guess) or

An elaborate hoax where someone faked over 100,000 chips (unlikely).

The IHS design also puts it in the bracket of the Skylake family which makes me believe this is something they started testing shortly after that. 

The lack of transistors/SMDs on the package means this is not a working processor but simply an early demo chip. This is further corroborated by that very weird part number and the lack of an Intel Confidential logo (a standard Intel logo without the word Intel in the center). The fact that “MALAY” is printed on the packaging further corroborates this theory, since Intel has three testing/packaging sites there: PG8, KM5 and KMO (in Penang and Kulim).

Here is where things get exciting though. The dies shown here are close in size to dual-core Skylake dies, and 4 dies would make this an 8-core chip. The fact that this chip was made roughly four years ago means that if this was something Intel was actively pursuing, it would have had ample time to perfect the technique and implement it.

The next thing I did was give the part numbers a quick Google, and lo and behold, these processors are available on eBay as well. The fact that there are so many of these CPUs out there (according to a Reddit thread, almost 100,000 of these quirky little processors exist) makes it very unlikely that this is all an elaborate hoax.

I would assume that these were part of Intel’s internal testing for viable alternatives for scaling performance all those years back. It is anyone’s guess whether those tests turned out to be positive and whether Intel is working on an MCM strategy or this is a project that was ultimately abandoned.

Intel has been competing with AMD using its traditional monolithic approach to processors and, unfortunately, that can’t really scale to compete with MCM-based designs like the one AMD employs. Limited by the reticle size, you can only expand a monolithic die so much – not to mention that yield has an inverse relationship with die size. MCM-based designs, on the other hand, not only benefit from increased yield but are also able to scale much further. If Intel decides to shift to MCM-based designs, it can quickly ramp up its performance numbers – something it direly needs to do in order to stay competitive with AMD. For Intel’s sake, I hope this is a project that ended successfully and has simply been kept under a close lid, not one that was eventually abandoned.
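The yield argument can be made concrete with the simple Poisson defect model, Y = exp(-D·A). The defect density and die areas below are purely illustrative, not Intel's or AMD's actual numbers:

```python
import math

# Poisson yield model: fraction of defect-free dies given a defect
# density D (defects/cm^2) and die area A (cm^2).
def poisson_yield(defects_per_cm2: float, area_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.2  # assumed defect density, defects per cm^2

# One 800 mm^2 monolithic die vs. four 200 mm^2 chiplets covering the
# same total silicon area. A chiplet only needs its own 2 cm^2 to be
# defect-free, and bad chiplets are discarded individually instead of
# killing the whole product.
monolithic = poisson_yield(D, 8.0)
chiplet = poisson_yield(D, 2.0)
print(f"monolithic yield: {monolithic:.2f}")   # ~0.20
print(f"per-chiplet yield: {chiplet:.2f}")     # ~0.67
```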

Intel Tiger Lake CPU Architecture – Potential HEDT-Like 10nm Cache Rebalance Incoming

Back when Intel unveiled the Skylake microarchitecture, Intel proceeded to rebalance the cache structure of its CPUs within the HEDT lineup. According to Geekbench, a similar cache rebalancing may be coming to the upcoming 10nm Tiger Lake mobile CPUs.

Intel’s Tiger Lake CPU Cache Structure Optimization – HEDT-Like 10nm Cache Rebalance Incoming

If Intel follows the pattern it set prior to the release of its HEDT Skylake-X CPUs, a similar rebalance may come to Tiger Lake. In the case of Skylake-X, Intel reduced the amount of L3 cache in favor of increased lower-latency L2 cache; with Tiger Lake-Y, though, Intel may bring improvements to each of the L1, L2, and L3 caches. Previously, mobile and desktop CPUs shared the same cache structure, but with a redesign of the cache, Intel could bring increased efficiency to mobile CPUs.

Taking a look at the Geekbench report, the system in question is running a Tiger Lake-Y CPU with four cores and eight threads. The chip features a significantly altered cache, with a large 1,280KB (1.25MB) of L2 cache per core, adding up to 5,120KB (5MB) of total L2 cache. That amount of L2 cache is a 400% improvement over its predecessor. Along with the significant increase in L2 cache, Intel’s sample CPU boasts a 50% L3 cache size improvement, at a total of 12MB.

As for the L1 cache, Intel has improved the size of the L1 Instruction cache to 48KB, though the L1 Data cache remains at 32KB. The sample CPU tested appears to overcome a trade-off required with existing Skylake-X CPUs; increased L2 cache does not come at the expense of reduced L3 cache, therefore performance increases across the board. Other improvements expected from Tiger Lake include the introduction of PCIe 4.0, a feature currently available exclusively to AMD’s X570 and TRX40 platforms, and Intel Xe iGPUs up to 96 EUs.
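The arithmetic behind those percentages is worth spelling out. Note that the 256KB-per-core baseline below is an inference: it is the L2 size of Intel's Skylake-derived mobile cores, and it is the only baseline that makes the quoted 400% figure work out (Ice Lake's 512KB per core would give 150%):

```python
# Tiger Lake-Y cache figures from the Geekbench report.
cores = 4
l2_per_core_kb = 1280
total_l2_kb = cores * l2_per_core_kb
print(total_l2_kb)       # 5120 KB, i.e. 5 MB of total L2

# Assumed baseline: 256KB L2 per core (Skylake-class mobile cores).
baseline_l2_kb = 256
improvement_pct = (l2_per_core_kb - baseline_l2_kb) / baseline_l2_kb * 100
print(improvement_pct)   # 400.0

# L3: 12MB vs. an 8MB baseline gives the quoted 50% increase.
l3_mb, baseline_l3_mb = 12, 8
print((l3_mb - baseline_l3_mb) / baseline_l3_mb * 100)  # 50.0
```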

Availability & Process

The Tiger Lake CPU family is anticipated to hit the market sometime within the 2020 – 2021 timeframe, though I would take this with a grain of salt as the 10nm process still has a way to go before full-scale production. Tiger Lake is set to be the successor to Intel’s first-generation 10nm Ice Lake and will serve as the optimization step in Intel’s Process-Architecture-Optimization model, as the third-generation 10nm variant (10nm++).

T-Force Dark-Z 16GB DDR4-3600 Review: Fast, Cheap and Stable

Most of us have seen Venn diagrams where the circles don’t meet in the middle, with statements such as “Fast, Cheap, Stable” filling those circles. But what if you could have DDR4-3600 for only $15 more than competing mainstream DDR4-3200 kits? That’s what Team Group aims to deliver with its T-Force Dark-Z kit (model TDZGD416G3600HC18JDC01), a part number that’s improbably easy to decipher but impossibly difficult to remember.

Covered in aluminum heat spreaders stamped with a 3D design that oddly appears to minimize contact with the chips beneath, each kit includes a case badge and installation instructions. Anyone savvy enough to read this review probably doesn’t need those instructions.

Concerning module programming, the good news is that this kit defaults to DDR4-2400 prior to setting XMP mode, where some competitors default to only DDR4-2133. A DDR4-2133 configuration is also available, if your system needs it.

The bad news: the 22-cycle tRCD and tRP of its DDR4-3600 XMP appear uncompetitive. Returning to the positive side, the 18-cycle CAS of this kit is good enough to at least call it “performance” RAM, since we define a “low latency” setting as fewer than 1 cycle for every 100 MHz of frequency (200MHz data rate). If you need a refresher on what any of that means, you can check out the middle section of our PC Memory 101 feature.
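That rule of thumb is easy to express as code. This is a minimal sketch (function names are my own, not from any standard): DDR memory transfers data twice per clock, so the clock frequency is half the data rate, and true latency in nanoseconds is cycles divided by clock frequency.

```python
# Sketch of the "low latency" rule of thumb described above.
# Inputs: data rate in MT/s (e.g. 3600 for DDR4-3600), CAS in clock cycles.

def true_latency_ns(data_rate_mts, cas):
    clock_mhz = data_rate_mts / 2      # DDR: clock runs at half the data rate
    return cas / clock_mhz * 1000      # cycles / MHz -> nanoseconds

def is_low_latency(data_rate_mts, cas):
    # "low latency" = fewer than 1 cycle per 100MHz of clock frequency
    return cas < (data_rate_mts / 2) / 100

print(true_latency_ns(3600, 18))   # 10.0 ns
print(is_low_latency(3600, 18))    # False: 18 cycles at 1800MHz sits right at the cutoff
```

By this definition, DDR4-3600 would need CAS below 18 to qualify as low latency, which is why CAS 18 earns only the “performance” label here.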

Comparison Hardware

Team Group’s T-Force Dark-Z goes up against competing DDR4-3600 kits from Crucial’s Ballistix Gaming brand and GeIL’s EVO series, mounted to MSI’s memory-mastering MEG X570 Ace and controlled by AMD’s stellar Ryzen 7 3700X. Toshiba’s OCZ RD400 512GB and Gigabyte’s GeForce RTX 2070 Gaming OC 8G push any system bottlenecks back toward the CPU and DRAM.

The T-Force Dark-Z DDR4-3600 reached DDR4-4266 on our AMD platform at timings that were barely worse than its rated XMP settings. But as great as that sounds, it still fell behind the two competing kits. Fortunately for Team Group, the company’s kit is also cheaper.

Lowest Stable Timings at 1.35V (Max) on MEG X570 ACE (BIOS 1.20)

Interestingly, the T-Force Dark-Z kit’s super-long XMP latencies aren’t actually required for stability on our system. We pushed them all the way from 18-22-22-42 to 16-18-18-36 without issue. With CPU and GPU overclocking often delivering diminishing returns, it’s a shame that so few casual overclockers take the time to optimize their memory.

Benchmark Results

Bandwidth differences between these three kits are too small to cause most buyers to care at XMP settings, so they probably won’t care that the Dark-Z took third place. The same is true in Sandra Memory Latency.

T-Force Dark-Z also loses in most of our gaming tests, but again by margins that are too small to elicit a buy-or-deny response from most builders.

XMP performance losses are slightly more noticeable in 7-Zip, but still not enough to make most buyers care. Heck, the T-Force kit even crept up to second place when all three kits were tuned to their tightest stable timings.

The key to the T-Force Dark-Z’s marketing is that this memory is cheap enough to make cautious buyers question its quality, yet it still carries the same type of limited lifetime warranty that most of its competitors boast. Given that low price, even with mediocre timings for its high data rate, we have no problem recommending it as a value pick for performance enthusiasts.

The Intel-AMD Showdown

It’s beginning to look like a really interesting market. After years of essentially one player taking all the spoils, we now have a significant second player with the technology to make a difference.

The incumbent is of course Intel INTC, which just apologized to customers for continued supply constraints, despite double-digit growth in supply from the first half to the second while maintaining its raised guidance (fourth-quarter revenue of $19.2 billion and EPS of $1.28 and full-year revenue of $71 billion and EPS of $4.42).

The unanticipated increase in PC demand is not the only issue, however. Intel is also trying to maintain 14nm production while ramping up 10nm. It’s building out capacity, and one might ask why this wasn’t done earlier, during all the product delays. But supply-demand parity hasn’t been achieved yet, and management hasn’t said when it will be. So that’s that.

On the other side, we have challenger Advanced Micro Devices AMD, which is announcing one breakthrough device after another, significant partnerships with companies like Alphabet’s GOOGL Google and Amazon AMZN and manufacturing alliances with leading edge foundry Taiwan Semiconductor TSM, which is making its 7nm chips.

It’s true that NVIDIA NVDA is also a player, but it doesn’t have x86 at its core and is instead focused on GPUs and the large-scale parallel processing they enable. This is making waves in HPC and the data center in general (especially after the Mellanox acquisition). But NVIDIA doesn’t play in all the x86 categories – PCs, mobile computing and data center – the way AMD does. So Intel and AMD are the main rivals.

Just for perspective, let’s see what the two companies brought in last quarter. Intel generated $19.19 billion in revenue and $1.42 in EPS (up $27 million and 2 cents a share, respectively, or more or less level with year-ago revenue and EPS in percentage terms). AMD brought in $1.80 billion in revenue and $0.14 in EPS (up $148 million (9.0%) and 4 cents a share (40%), respectively from last year).

So AMD’s revenue and per-share earnings are both about 9% of Intel’s, making it almost a David-and-Goliath situation. Moreover, Intel has $12.025 billion on its balance sheet while AMD has a mere $1.209 billion. AMD also generates a small fraction of the cash flow that Intel does.
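The roughly-9% claim is easy to verify from the quarterly figures quoted above; a quick check (using only numbers from this article):

```python
# Quarterly figures cited in the article.
intel_rev, amd_rev = 19.19, 1.80      # revenue, billions of dollars
intel_eps, amd_eps = 1.42, 0.14       # earnings per share, dollars

print(f"Revenue ratio: {amd_rev / intel_rev:.1%}")   # 9.4%
print(f"EPS ratio: {amd_eps / intel_eps:.1%}")       # 9.9%
```

Both ratios land in the mid-to-high 9% range, consistent with the “about 9%” characterization.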

The difference in financial position between the two companies, despite AMD’s much faster growth, means Intel has significant resources to wage a very aggressive price war that AMD will find harder to sustain. This could especially hurt AMD as it preps to target mid- and high-range segments (especially in mobile computing).

Intel will no doubt adopt this strategy because its design teams have fallen behind, and semiconductor designs take years to develop – followed by qualifying the production process, ramping output and bringing products to market. And because Intel is the foundry for its own chips, it has the added difficulty of not relying on manufacturing expertise that’s already available elsewhere (though it may do so if necessary).

Intel’s management team is also not what it used to be. Given these difficulties, and the fact that current management has admitted to a likelihood of share losses to AMD, all the traditional Intel partners have their Plan B, which is AMD. So they all carry at least some AMD-powered models and systems.

But what about the valuation? At 12.29X forward twelve months’ earnings, INTC shares are trading in a tight range, but below its median value over the past year. They’re also trading below the S&P 500’s 17.97X. So Intel shares are undervalued. AMD on the other hand is trading at 48.02X, which is close to the high end of the range over the past year, so it’s obviously overvalued.

That is, unless it consistently beats estimates over the next year by a significant margin. So for comparison, let’s take a look at its surprise history. The company met earnings estimates in each of the last two quarters, beat by 21% in the quarter prior to that and missed by 11% before that, averaging a 2.2% gain over the last four quarters. This doesn’t look too significant. If AMD doesn’t significantly top expectations in the next year, the current valuation looks untenable.

That’s why we have a Zacks Rank #3 (Hold) on AMD shares and a Zacks Rank #2 (Buy) on INTC shares. You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.