A quick news piece on information coming out of Intel’s annual Investor Day in California. As confirmed to Ashraf Eassa by Intel at the event, Intel’s 8th Generation Core microarchitecture will remain on the 14nm node. This is an interesting development, given that Intel’s recently launched 7th Generation Core products were touted as the ‘optimization’ stage of the new ‘Process-Architecture-Optimization’ three-stage cadence that replaced the old ‘tick-tock’ cadence. With Intel stringing out 14nm (or at least an improved variant of it, as we’ve seen with 7th Gen) for another generation, it raises the question of where exactly Intel can promise future performance or efficiency gains unless it starts implementing microarchitecture changes.

Despite this, if the supposed ‘leaked’ roadmaps are to be believed (we have not yet confirmed them with a second source), the 8th Generation product ‘Cannon Lake’ is geared more towards the Y and U parts of Intel’s roadmap. This would ring true with the mobile-first strategy Intel has followed in recent generations, where the smaller, low-power chips come off the production line first for a new product; however, we would also expect 10nm to debut in those smaller chips (as demonstrated at CES). Where Cannon Lake will end up in the desktop or enterprise segments remains to be seen. To put something a bit more solid behind this, Ashraf also relayed words from Dr. Venkata ‘Murthy’ Renduchintala, VP and GM of Client and IoT:

‘Murthy referred to it at the event: process tech use will be ‘fluid’ based on segment.’

If one reads into this, we may start seeing a blend of process nodes for different segments at the same time, aimed at different areas of the market. We already have that to some extent with the mainstream CPUs and the HEDT/Xeon families, but this phrasing suggests we might see another split between consumer products, or between consumer and enterprise. We may reach a point where Intel's 'Gen' naming scheme for its CPUs covers two or more process node variants.

Speaking of the enterprise segment, another bit of information has surfaced, coming from a slide shown during a talk by Diane Bryant (EVP/GM of Data Center) and posted online by Ashraf. The slide contains the words ‘Data center first for next process node’.

We can talk about a process node either in terms of its ‘number’ (14nm/10nm/7nm) or by the variants within that process (high power, high efficiency). One might suspect this means Intel is moving hard and fast with 10nm for Xeons and big computing projects, despite having shown off 10nm silicon at CES earlier this year. That being said, it’s important to remember that the data center market is large and includes high-density systems with many smaller cores, such as Atom cores, and Intel did recently open up its 10nm foundry business to ARM Artisan IP projects. So while the slide does say ‘Data center first’, it might be referring to data center projects based on ARM IP in that segment rather than big 4-24+ core Xeons. At this stage of the game it is hard to tell.

On top of all this, Intel still has extreme confidence in its foundry business. An image posted by Dick James of Siliconics from the livestream shows that Intel expects to have a three-year process node advantage by the time its competitors (Samsung, TSMC) start launching 10nm:

I’ve been brief with this news for a reason: at this point there are a lot of balls in the air and many different ways to take this information, and the Investor Day is winding down, with the talks finishing and smaller 1-on-1 meetings taking place. We may get further clarification on this news as the day goes on.

Update 1: On speaking with Diane Bryant, the 'data center gets new nodes first' strategy is to be achieved by using multiple small dies on a single package. But rather than use a traditional multi-chip package, as in previous multi-core products, Intel will be using its Embedded Multi-Die Interconnect Bridge (EMIB) technology as demonstrated at ISSCC: an MCP/2.5D interposer-like design.

An Intel Slide from ISSCC, via PC Watch

Initially EMIB was thought of as a technology tied to Intel's acquisition of Altera and potential future embedded FPGA designs, but given the slide above and the comments made at the Investor Day, it seems there are other plans for this technology too. The benefit of using multiple smaller dies over a large monolithic ~600mm² die is typically cost and yield, although the EMIB technology has to be up to par, and there may be a latency or compatibility trade-off.
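To put rough numbers on the yield argument, here is a minimal sketch using the classic Poisson yield model (yield ≈ e^(−D·A), where D is defect density and A is die area). The defect density used is an illustrative assumption, not an Intel figure; the point is only that a defect kills a 150mm² die rather than a whole 600mm² one, so a larger fraction of each wafer ends up in sellable product:

```python
import math

# Simple Poisson yield model: P(die has zero defects) = exp(-D * A),
# with defect density D in defects/mm^2 and die area A in mm^2.
# D = 0.001 is an illustrative number only.
D = 0.001

def die_yield(area_mm2, d=D):
    """Fraction of dies of the given area that come out defect-free."""
    return math.exp(-d * area_mm2)

mono = die_yield(600)    # one 600mm^2 monolithic die
small = die_yield(150)   # one 150mm^2 die (e.g. four per package)

# Assuming good small dies can be mixed and matched across packages,
# the small-die fraction is also the fraction of usable wafer silicon:
print(f"monolithic good-silicon fraction: {mono:.1%}")   # ~54.9%
print(f"small-die good-silicon fraction:  {small:.1%}")  # ~86.1%
```

Note that if all four small dies had to come from fixed positions with no mix-and-match, the product yield would collapse back to e^(−0.6), the same as the monolithic die; the gain comes entirely from discarding bad small dies individually.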

Source: Intel, @TMFChipFool

Comments

  • Meteor2 - Thursday, February 16, 2017 - link

    That's a good link. As we say, TSMC's 7nm is equivalent (in all meaningful senses) to Intel's 10nm. Intel is manufacturing 10nm now. Let's see if TSMC makes good on their promise of their 7nm next year, when they've only begun their 10nm this year. I'm not saying it's impossible, just that it's unlikely. And TSMC will not make another node jump, down to what they call 5nm, in just two years in 2019. That's not going to happen.
  • fanofanand - Friday, February 10, 2017 - link

    AMD used their revenue from consoles to keep them afloat while they put the finishing touches on Zen and Polaris. I agree Intel could have repurposed the cash they wasted on mobile but if you think AMD doing consoles was bad for them, I'm glad you aren't managing my money.
  • HollyDOL - Friday, February 10, 2017 - link

    Huh, I really don't know what to think about this. Guess before I pass any judgement I'll wait for Ian's review when he gets the chips.
  • Frenetic Pony - Friday, February 10, 2017 - link

    So how many more billions of dollars is the industry going to dump into EUV/silicon before they actually throw money at graphene or black phosphorous or something?
  • witeken - Friday, February 10, 2017 - link

    It takes a lot of years for things to go from discovery, to research to development to production. Diane Bryant said it took 16 years for silicon photonics to go through all of this.
  • Amandtec - Friday, February 10, 2017 - link

    The last few nodes have shown little performance increases but big efficiency increases. Efficiency matters far more in the data centre than it does in your laptop, where the screen is using most of the power most of the time.
  • Bronek - Friday, February 10, 2017 - link

    As Herb Sutter pointed out over 10 years ago, the free lunch is over. We are not going to get any significant performance improvements per core any more. There are too many constraints on CPU speed now, and none of them can be removed by a sheer number of transistors: switching speed no longer increases with a new process, there is memory latency, etc. The only way vendors can now sell their chips as "performing better than the competition" is by providing more cores, more threads per core (see IBM POWER8 and POWER9), and ensuring these work their best even under full load. Providing better and faster interconnects is also an option. But this also means that applications will not get any faster unless they tap into a multi-threaded execution environment or make more use of connected accelerators, which is often tricky. This means very little for your average gaming desktop machine, since the only accelerator you need is a GPU, and the interconnect (i.e. PCIe) is rarely a bottleneck for your typical application, i.e. a game. I would say that this category of CPUs has already reached a plateau. Which is good news, because it means the fight has moved to data centers, where competition to Intel is long overdue.
  • A5 - Friday, February 10, 2017 - link

    Well, the screen using more power than the CPU block is a fairly recent development due to said efficiency gains. There was a time not that long ago when that was definitely not the case.
  • BrokenCrayons - Friday, February 10, 2017 - link

    On my Compaq 486DX2 @ 66MHz and my Texas Instruments Pentium @ 90MHz, the screen was the highest-demand component, and neither screen was higher than 640x480 or larger than 12 inches. The screen's power consumption is absolutely not a new problem. Both of those laptops were passively cooled, which speaks to their low TDP and, by inference, low electrical power demand. The same was probably true of my monochrome-panel 386SX @ 16MHz, which was my very first laptop.
