In a story posted today on EE Times, Altera announced at the ARM Developers Conference that they have entered into a partnership with Intel to have their next-generation 64-bit ARM chips produced at Intel's fabs. According to the report, Altera will use Intel's upcoming 14nm FinFET process technology to manufacture a quad-core Cortex-A53 SoC surrounded by FPGA logic.

The Intel/Altera partnership was first announced back in February 2013, and it's worth noting that FPGAs are not an area where Intel currently competes. Even though ARM logic will be on the new chips, this likely won't lead to direct competition with Intel's own processors. The bigger deal, of course, is that while Intel's 22nm process would already give anyone willing to pay Intel's price a leg up, 14nm puts them a full step ahead of anything the other foundries can offer.

Intel has apparently inked deals with other companies as well. The Inquirer has this quote from an Intel spokesperson: “We have several design wins thus far and the announcement with Altera in February is an important step towards Intel's overall foundry strategy. Intel will continue to be selective on customers we will enable on our leading edge manufacturing process.”

The key there is the part about being “selective”, but I would guess it's more a question of whether a company has the volume as well as the money to pay Intel, rather than whether Intel would be willing to work with them at all. There are plenty of possibilities: NVIDIA GPUs on Intel silicon would certainly be interesting, and given that AMD has gone fabless as well, we could also see their future CPUs/GPUs fabbed by Intel. There are plenty of other ARM companies too (Qualcomm, for instance), not to mention Apple. But those are all more or less in direct competition with Intel's own processors, so unless we're talking about potential x86 or Quark licensees, it's tough to predict where this will lead.

If we take things back another step, the reality of the semiconductor business is that fabs are expensive to build and maintain. They then need to be updated to the latest process technology every couple of years, or new fabs need to be built, just to stay competitive. If you can't run your fabs more or less at capacity, you start to fall behind on all fronts. If Intel could keep all of its fabrication assets fully utilized on its own products, it would be a different story, but that era appears to be coming to a close.

The reason for this is pretty simple: the computing performance most people need on a regular basis has largely plateaued. Give me an SSD and I'm perfectly fine running most of my everyday tasks on an old Core 2 Duo or Core 2 Quad. The difference between Bloomfield, Sandy Bridge, Ivy Bridge, and Haswell processors is likewise shrinking with each generation; the i7-965X I'm typing this on continues to run very well, thank you very much! If people and businesses aren't upgrading as frequently, you need to find other ways to keep your fabs busy, and selling production capacity to other companies is the low-hanging fruit.

Regardless of the reasons behind the move, this potentially marks a new era in Intel's fabrication history. It will be interesting to see what other chips end up being fabbed at Intel over the next year or two. Will we see real competitors, and not just FPGA chips, fabbed at Intel? Perhaps some day, but probably not in the short term.

Source: EE Times

Comments

  • azazel1024 - Thursday, October 31, 2013

    Krysto, not sure where you are getting that. Bay Trail looks like it blows away basically all ARM CPUs right now.

    The only one that seems to be beating it out is the brand new A7 chip, at least compared to the T100 and its Z3740...which is not the fastest Intel Atom, and those are just browser benchmarks. The Z3770 looks like it would likely beat the A7 in just about everything, at least by a slight margin, given its base and turbo clock speed advantage over the Z3740.

    Most other ARM chips, even pretty new ones (Tegra 4 isn't that old), seem to get spanked, with the Z3740 having a 20-60% advantage over them.

    Also, it might well prove that the Z3740/Z3770 are using less power than those ARM chips as well (hard to tell, since we can really only see Wh per hour of runtime for overall package power, but the T100 looks very competitive against the ARM crowd without having an idea of how much power the display and other bits are using).

    Unless Intel slips, it looks like they are dropping 14nm in Broadwell sometime in Q1 2014, possibly before 20nm is even available from the foundries, and that 20nm will be planar. The little I've been able to dig up says Intel is likely to drop Airmont/Cherry Trail sometime in Q2 or Q3 next year, following shortly on its heels with Goldmont/Willow Trail around Q4 2014 or Q1 2015, with (I assume) the 10nm shrink sometime late in 2015.

    Merrifield is definitely "running late" in terms of phone introduction, but Intel looks set to stay easily at least a year ahead of its ARM competitors on process size, and it will still have a technology advantage (FinFET versus planar) once 20nm drops for ARM producers. Also, last I heard, most of TSMC's 16nm FinFET is going to be a hybrid, with only part of the transistor being 16nm and the rest 20nm, at least at first.

    32nm to 22nm was a full node for Intel. 28nm to 20nm is a full node for TSMC and others. 22nm to 14nm is a full node for Intel. Unless I'm missing something, 20nm to 16nm is only a half node for TSMC and others, which means that when TSMC gets there, they won't have had as large a shrink as Intel will have. Intel will be racing towards 10nm (on Atom) not long after TSMC and the others have just gotten to 16nm, and Intel might even get to 10nm before they get to 16nm, depending on Intel's release capability/plans for Atom in 2015.
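    To put rough numbers on the full-node/half-node distinction (a back-of-the-envelope sketch, using the common rule of thumb that a full node is roughly a 0.7x linear shrink, i.e. about half the area; actual feature scaling varies by foundry):

    ```python
    # Rough node-shrink comparison (rule-of-thumb arithmetic, not foundry data).
    # Linear scale factor = new / old; area scales with the square of that factor.
    steps = [
        ("Intel 32nm -> 22nm", 32, 22),
        ("TSMC  28nm -> 20nm", 28, 20),
        ("Intel 22nm -> 14nm", 22, 14),
        ("TSMC  20nm -> 16nm", 20, 16),
    ]
    for name, old, new in steps:
        linear = new / old
        print(f"{name}: linear {linear:.2f}x, area {linear * linear:.2f}x")
    # The first three come out near 0.6-0.7x linear (roughly half the area or less),
    # i.e. full nodes; 20nm -> 16nm is 0.80x linear (0.64x area), closer to a half step.
    ```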
  • djscrew - Saturday, November 2, 2013

    10nm in 2015? Unless you're talking early engineering samples, $5 says you're on crack.
  • djscrew - Saturday, November 2, 2013

    I would be shocked if any of these sub-20nm process nodes didn't get pushed back at least 6 months, and more likely a year, especially those that aren't Intel or Sammy. Never mind the issue of yield.
  • abufrejoval - Tuesday, November 5, 2013

    I can believe that Intel needs a broader revenue stream to maintain its fab advantage: the process advantage is constantly eroding, and while the investments required to maintain it seem to also follow Moore's law, the revenues obtainable through that advantage are rapidly declining, with saturation outside of servers and with server CPUs evolving too slowly (or gaining too little from the shrinks).

    Most of the transistor real estate made available by process shrinks seems to go into caches. If I look at an x86 die photo these days, the only things that stand out to me are huge areas of totally regular structure implying cache. Easily 80 percent of the die area is cache, while the majority of the rest may go to register renaming (also a cache) and floating point (totally useless on most big data queries).

    From what I understand about DDR4, vendors are, first of all, far more inclined to look for alternate places to spread their risk than to actually move there.

    Next is that you really need buffer chips, and not just one per DIMM but one per die pack.
    Which immediately has me wondering why those couldn't move onto the dies themselves, perhaps with only the first die in the pack acting as a gateway (I don't think these register dies are really large or expensive to add to a memory die, even if only one out of 8 or so will actually be active).

    Then, hearing about all these smart features like on-the-fly activation of spare rows, I think back to the graphics VRAM days 30 years ago, when VRAM included BitBlt helper functions like fast clear or color expansion to enable 1080 graphics on 8MHz (effective) GPUs like the TMS34020.

    The idea was to take advantage of being able to manipulate RAM not in bits but in entire rows, by adding a few command pins.

    Many compute workloads today involve ultra-fast searching for patterns, and engineers throw megatons of silicon and watts at moving huge quantities of bits and data down super-wide but long highways to solve the compute problem at a distant CPU cluster, while using one bit out of millions as far as RAM is concerned: it's an enormous waste of silicon real estate and power-hungry CMOS state transitions.
    Clearly, partitioning the load and moving it closer to the RAM seems the smarter approach, so why not go all the way and move the CPU power towards the memory (or the memory into the CPU), effectively producing something capable of acting (among other things) like a map-reduce chip (see the toy sketch below).
    Coming back to VRAM with built-in BitBlt, and to FPGAs at 14nm: it would seem to me that a hybrid DRAM/FPGA, with a few ARM cores sprinkled in to handle the FPGA reprogramming and some housekeeping, could be just the ticket for producing application-specific smart RAM on the fly, capable of doing simplified operations on entire rows of RAM, which either ARM or x86 CPUs could then gather for meta-processing over byte+tag sized DRAM result/content ports.
    In that scenario, having to deal with JEDEC and lots of DRAM manufacturers would not only be a huge burden, but you also wouldn't want to give valuable IP away.
    So I wonder why Intel doesn't use all that excess fab capacity to move into the production of smart RAM, accelerating light years beyond the server competition.
    Thankfully this idea is (a) totally crazy and undoable, and (b) nobody at Intel will read it anyway, which is why I feel so free to post it here ;-)
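    As a toy illustration of the split being described above (purely hypothetical names, Python used only for brevity; this does not correspond to any real product or API): the contrast is between hauling every row across the memory bus to a distant CPU just to test a predicate, versus evaluating the predicate next to the rows and shipping back only the handful of matches.

    ```python
    # Conceptual sketch of near-memory filtering (all names hypothetical).

    # Conventional path: every row is transferred to the CPU just to be tested,
    # so bus traffic scales with the size of the table.
    def cpu_side_filter(rows, predicate):
        return [row for row in rows if predicate(row)]

    # "Smart RAM" path: the predicate runs next to the rows (the FPGA/ARM logic
    # living with the DRAM), and only matching row indices travel back, so bus
    # traffic scales with the number of hits rather than the table size.
    class SmartRamBank:
        def __init__(self, rows):
            self.rows = rows

        def scan(self, predicate):
            return [i for i, row in enumerate(self.rows) if predicate(row)]

    if __name__ == "__main__":
        rows = [{"id": i, "tag": i % 997} for i in range(100_000)]
        bank = SmartRamBank(rows)
        hits = bank.scan(lambda r: r["tag"] == 42)  # roughly 100 indices return
        print(f"{len(hits)} of {len(rows)} rows matched; only indices crossed the bus")
    ```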
