The NVIDIA GeForce 600M Lineup

As mentioned previously, NVIDIA's GeForce 600M series consists largely of rebadges, a die shrink, and the Kepler-based GK107. NVIDIA splits their mobile graphics into two categories (three if you count the anemic GeForce 610M): Performance and Enthusiast. Note that NVIDIA lists almost every spec as "up to," so expect at least some wiggle room on core and memory clocks. The memory bus and memory type should be consistent across implementations, though (with the exception of the GT 640M LE). These are their Enthusiast-class GTX GPUs:

                    GeForce GTX 675M   GeForce GTX 670M   GeForce GTX 660M
GPU and Process     40nm GF114         40nm GF114         28nm GK107
CUDA Cores          384                336                Up to 384
GPU Clock           620MHz             598MHz             835MHz
Shader Clock        1240MHz            1196MHz            -
Memory Eff. Clock   3GHz               3GHz               4GHz
Memory Bus          256-bit            192-bit            128-bit
Memory Bandwidth    96GB/s             72GB/s             64GB/s
Memory              Up to 2GB GDDR5    Up to 3GB GDDR5    Up to 2GB GDDR5
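The bandwidth figures in the table fall straight out of bus width and effective memory clock, if you want to sanity-check them (a quick sketch using the numbers above):

```python
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# GTX 675M: 256-bit bus at 3GHz effective
print(memory_bandwidth_gbs(256, 3000))  # 96.0 GB/s
# GTX 670M: 192-bit bus at 3GHz effective
print(memory_bandwidth_gbs(192, 3000))  # 72.0 GB/s
# GTX 660M: 128-bit bus at 4GHz effective
print(memory_bandwidth_gbs(128, 4000))  # 64.0 GB/s
```

Note how the GTX 660M's faster GDDR5 clock only partially compensates for its narrower 128-bit bus.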

What we're looking at, essentially, are the GeForce GTX 580M and 570M rebadged as the 675M and 670M; the 670M gets a minor clock bump from the 570M's 575MHz, but these are basically the same top-end parts users have been enjoying for a while now. That's not necessarily a bad thing, as the 580M and 570M are capable performers, but it certainly leaves room for a new top-end mobile GPU (GTX 680M, anyone?) at some point in the future.

Meanwhile, GK107 is pushed about as hard as it can be with the GTX 660M. We're still not certain of the actual core count, as NVIDIA is being so liberal with their use of "up to" clauses. If we assume the 660M will be the highest clocked mobile variant at launch, it will likely use all of the available cores, while the lower end models will potentially trim down the number of active cores. According to NVIDIA, however, there's also some flexibility with the core counts and clock speeds, with the end goal being to deliver performance within a relatively tight range; more on this in a moment.

Astute observers will note that NVIDIA actually already has a couple of 600M series GPUs in the wild; these are rebadges of existing 40nm GF108-based GPUs, and you'll see them in the next chart which represents the top half of NVIDIA's Performance (GT) line. Note also that NVIDIA isn't providing spec memory clocks for any of these chips; they're all "up to" the values shown below.

                    GeForce GT 650M         GeForce GT 640M         GT 640M LE (28nm)   GT 640M LE (40nm)
GPU and Process     28nm GK107              28nm GK107              28nm GK107          40nm GF108
CUDA Cores          Up to 384               Up to 384               Up to 384           96
GPU Clock           850MHz                  625MHz                  500MHz              762MHz
Shader Clock        -                       -                       -                   1524MHz
Memory Bus          128-bit                 128-bit                 128-bit             128-bit
Memory Bandwidth    Up to 64GB/s            Up to 64GB/s            Up to 28.8GB/s      Up to 50.2GB/s
Memory              Up to 2GB DDR3/GDDR5    Up to 2GB DDR3/GDDR5    Up to 2GB DDR3      Up to 2GB DDR3/GDDR5
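Working backwards from the listed bandwidth figures, you can estimate the effective memory clocks NVIDIA is assuming for the two GT 640M LE variants (our back-of-the-envelope estimate, not an official spec):

```python
def effective_clock_mhz(bandwidth_gbs: float, bus_width_bits: int) -> float:
    """Effective memory clock implied by peak bandwidth and bus width."""
    return bandwidth_gbs * 1e9 / (bus_width_bits / 8) / 1e6

# GT 640M LE (28nm): up to 28.8GB/s on a 128-bit bus
print(effective_clock_mhz(28.8, 128))  # 1800MHz, i.e. DDR3-1800
# GT 640M LE (40nm): up to 50.2GB/s on a 128-bit bus
print(effective_clock_mhz(50.2, 128))  # ~3137MHz effective, i.e. GDDR5
```

In other words, the bandwidth gap between the two LEs is almost entirely a DDR3 vs. GDDR5 story.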

Starting with the GT 650M, what's odd is that the GT 650M is, at least on paper, capable of being a faster chip than the GTX 660M. We'd guess it will either have fewer than 384 CUDA cores or will run at clocks lower than 850MHz. We've also seen the Acer M3 with the GT 640M, which did have 384 cores clocked at 625MHz. (It also used DDR3 and was paired with a ULV CPU, so it doesn't represent the maximum performance we're likely to see from the GT 640M.) Note that all of the announced 28nm Kepler parts currently use the GK107 core, but NVIDIA has not provided details on the exact core counts yet. In fact, let's get right into the crux of the problem.

At present, NVIDIA is not disclosing the exact configuration of the various GK107 parts, which means we don't know the granularity for disabling/harvesting die. If GK107 uses the same 192-core SMX/GPC as GK104, we'd likely see 192-core or 384-core variants, and the charts right now suggest everything will be 384 cores with differing clocks. With the smaller die size, there's also a possibility that the chips consist of either four 96-core GPC/SMX units or eight 48-core GPC/SMX units, and those would be the smallest functional blocks that could be disabled. Considering NVIDIA lists the GTX 660M as "up to 835MHz" and the GT 650M as "up to 850MHz", with both being "up to 384 cores", there may be more granularity available than 192-core blocks. The GT 650M could have 336 cores at 850MHz or 384 cores at 740MHz, and both would provide approximately the same performance. However, until we can get more information (or the parts are actually found in the wild), we can't say for sure what clocks or core counts the GK107 GPUs will use. This leads us into the next topic for these parts.
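The core-count/clock tradeoff is easy to sanity-check: shader throughput scales roughly with active cores times clock speed. Using the hypothetical GT 650M configurations from the speculation above:

```python
def relative_throughput(cores: int, clock_mhz: float) -> float:
    """Rough shader throughput proxy: active cores times core clock."""
    return cores * clock_mhz

# Two hypothetical GT 650M configurations (speculative, not confirmed by NVIDIA)
a = relative_throughput(336, 850)  # fewer cores, higher clock
b = relative_throughput(384, 740)  # more cores, lower clock

print(abs(a - b) / b)  # well under 1% apart
```

That two such different configurations land within a fraction of a percent of each other is exactly why "up to" specs make the parts so hard to pin down.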

Yes, NVIDIA is up to their old tricks again with the GeForce GT 640M LE (and given some of the above, we might see even more variations on the other parts as well). I thought we were past this after the marketing nightmare that was the GeForce GT 555M. If history has taught us anything, it's that any chip supporting both DDR3 and GDDR5 is almost always going to ship with DDR3 once you get into this performance bracket. I'm honestly not sure how we're going to tell the two GT 640M LEs apart in the marketplace, outside of waiting for reviews to surface, and that bothers me. Our best advice is to research what you're getting if you want faster GPU performance.

                    GeForce GT 635M         GeForce GT 630M             GeForce GT 620M
GPU and Process     40nm GF116              28nm GF117/40nm GF108       28nm GF117
CUDA Cores          96/144                  96                          96
GPU Clock           675MHz                  800MHz                      625MHz
Shader Clock        1350MHz                 1600MHz                     1250MHz
Memory Bus          192-bit                 128-bit                     128-bit
Memory              Up to 2GB DDR3/GDDR5    Up to 2GB DDR3              Up to 1GB DDR3

Speaking of the GeForce GT 555M, it's basically been rebadged as the GeForce GT 635M. Note that while NVIDIA's spec sheet lists it as only supporting GDDR5, models with DDR3 are already out in the wild. Either way, the 635M is basically a holdover from the last generation and at the risk of speculating, I wouldn't expect to see it in any great volume. NVIDIA has more profitable chips to sell, and those more profitable chips are also liable to be better citizens in terms of performance-per-watt.

Unfortunately, the GT 630M is another problem child. The 28nm variant is likely going to be much more compelling than its 40nm counterpart, as NVIDIA estimates that the shrink roughly halves the power consumption of the chip while delivering the same level of performance as (better, actually, than) last generation's very popular GeForce GT 540M. Just like the two GT 640M LEs, however, there's no way to tell which version you're going to get. Ultimately, we expect the 40nm parts to disappear and be replaced by 28nm variants, but we'll have to wait and see how that plays out.

By the way, that 28nm replacement of GF108 may not initially seem very compelling, but it should actually be a great and inexpensive option for getting decent graphics performance without requiring much in the way of cooling (let alone power consumption). The GT 540M has been a perfectly adequate performer this generation, and having that now become the baseline for mobile graphics performance at half the power draw is a good thing. The codename appears to be GF117 and NVIDIA is keeping many of the details close to their chest, but architecturally it's not simply a die shrink of GF108 and should include some additional enhancements that take advantage of the move to 28nm. Of course, die shrinks are never "simple", so just what has been enhanced remains to be seen.

Update: NVIDIA has now posted their spec pages for the above GPUs. I've gone ahead and linked them in the above table and updated a few items. Worth noting is that the GT 650M now lists clocks of 850MHz with DDR3 and 735MHz with GDDR5. It looks like both versions will have 384 cores, so OEMs will choose between more computational power and less bandwidth (DDR3) or less computational power and more bandwidth (GDDR5). NVIDIA suggested that their goal is to keep products with the same name within ~10% performance range, and the tradeoffs listed should accomplish that goal. I'm also inclined to think GK107 consists of two 192 core blocks now, as every product page using that core only states 384 cores, with the exception of the GT 640M LE, but we know 640M LE will have both 40nm and 28nm variants. In general, we'd suggest going with the 28nm GDDR5 configurations when possible, as 128-bit DDR3 has been a bit of a bottleneck for even 96 core GF108, never mind the improved Kepler chips.
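The GT 650M tradeoff described in the update is worth quantifying. On the cores-times-clock proxy, the DDR3 variant's clock advantage works out to roughly 13.5% on paper, but its bandwidth deficit pulls real performance back into NVIDIA's ~10% window. A rough sketch (the 28.8GB/s DDR3 figure is our assumption, borrowed from the GT 640M LE's DDR3 spec, not an official GT 650M number):

```python
# Both GT 650M variants ship 384 cores; only clocks and memory differ.
ddr3_compute = 384 * 850    # DDR3 variant: higher core clock
gddr5_compute = 384 * 735   # GDDR5 variant: lower core clock

compute_gap = (ddr3_compute - gddr5_compute) / ddr3_compute
print(f"compute gap: {compute_gap:.1%}")  # ~13.5% on paper

ddr3_bw = 28.8   # GB/s, 128-bit DDR3 (assumed, matching the GT 640M LE DDR3 spec)
gddr5_bw = 64.0  # GB/s, 128-bit GDDR5 at 4GHz effective
print(f"bandwidth ratio: {gddr5_bw / ddr3_bw:.2f}x")  # >2x in GDDR5's favor
```

Given how bandwidth-limited 128-bit DDR3 already was with GF108, the GDDR5 configuration looks like the safer bet despite the lower core clock.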

Comments

  • jigglywiggly - Thursday, March 22, 2012 - link

    summary of article
    everything is rebadge except 680m
    except we dont know anything about it >.>
  • JarredWalton - Thursday, March 22, 2012 - link

    Right. Except, none of the 28nm parts are rebadges, and only a few 40nm parts will have a 600M name (and will likely be short lived).
  • CeriseCogburn - Saturday, March 24, 2012 - link

    Summary last page large paragraph:
    Nvidia wins big.
    AMD is way behind, is still behind, going further behind. AMD needs to step up, especially with drivers.
  • mentatstrategy - Wednesday, March 28, 2012 - link

    AMD never makes the best drivers for their hardware - you have to go 3rd party for better drivers... main reason I never bought AMD/ATI - you make good hardware but, you don't know how to make drivers for it? Nvidia FTW
  • SInC26 - Tuesday, April 10, 2012 - link

    The GTX 680M, GTX 660M, GT 650M, GT 640M, GT 630M, and GT 620M are not rebadges.
    I'm personally looking forward to the GT 640M w/ GDDR5 in the Dell XPS 15 refresh.
  • aguilpa1 - Thursday, March 22, 2012 - link

    the mobile chip marketing is so f'ed up it's not even funny.

    My only hope is a GTX 680M (not really a 680 of course) brings with it the new power efficiency we see in the 680GTX and at least boosts performance up a healthy 30% from a 580m would make it a winner in my book regardless of name BS.
  • MrSpadge - Thursday, March 22, 2012 - link

    A huge mess, indeed. Fermis in 40 and 28 nm and Keplers, all named almost similar and (almost) none of them with hard specs. Yeah, sounds like one big family...
  • Wreckage - Thursday, March 22, 2012 - link

    Optimus alone makes NVIDIA's mobile lineup superior to AMD. Until AMD can catch up, people should avoid their mobile chips
  • Wolfpup - Thursday, March 22, 2012 - link

    Optimus and other switching technologies are horrible, and I wish Anandtech would quit pushing them. Driver weirdness, stability issues, worse performance....

    I and quite a number of others I know intentionally buy notebooks WITHOUT optimus.
  • prdola0 - Thursday, March 22, 2012 - link

    This is totally false. Where did you see that Optimus would have performance reduced by ANY amount? There is no such thing. Optimus does not reduce performance. There is also no driver weirdness and stability issues. Where did you get that? I've been working with an Optimus system for more than a year now and the performance was flawless.
