Micro-Star International does not plan to block overclocking of Nvidia GeForce GTX 900M-series graphics processing units inside its laptops for gamers. The company, which produces some of the finest high-end notebooks, does however warn overclockers about possible overheating and warranty issues caused by overclocking.
“MSI is not planning any changes to the overclocking capabilities of MSI’s gaming notebooks,” a statement from MSI published by PC Games Hardware reads. “The latest statements from Nvidia on the subject have no effect on our product design. However, we point out that warranty and services will be voided if defects occur as a result of components operating outside of their specifications.”
Earlier this month Nvidia blocked overclocking functionality for the mobile GeForce GTX 900M family of graphics processors in its latest drivers. The firm said that the GPUs were not designed to support overclocking. After the move caused massive outrage among enthusiasts, who use their high-end notebooks to play the latest PC games, the chip developer promised to restore the capability in its next drivers. However, it then transpired that the company had started to block overclocking support in the vBIOS of the MXM cards carrying GeForce GTX 900M chips.
Notebook makers have the right to modify the vBIOS of the graphics adapters they get from Nvidia. Therefore, it should not be a problem for MSI to re-enable the overclocking capabilities of the GeForce GTX 965M, 970M and 980M graphics solutions.
MSI is one of the leading makers of gaming notebooks, so it is not surprising that the company wants its laptops to have no drawbacks, such as limits on overclocking. MSI’s flagship GT80 Titan gaming notebook features two GeForce GTX 980M GPUs in SLI, up to four SSDs and a mechanical keyboard. The machine was designed to be upgradeable and to handle different components. While it is packed with performance, it has plenty of headroom for overclocking too. KitGuru recently reviewed the MSI GT80 Titan and found it to be the “most powerful laptop on the market”, one that does not accept any compromises.
It remains to be seen how other notebook vendors will react to Nvidia’s initiative to ban overclocking of its notebook GPUs. If MSI and a couple of other suppliers re-activate overclocking support in their laptops, they will gain a competitive advantage over those who do not. As a result, conservative notebook vendors will have to follow. But what if Nvidia manages to persuade everyone that overclocking is too dangerous?
KitGuru Says: MSI is clearly doing the right thing. Even Intel offers mobile microprocessors with unlocked multipliers to enable overclocking. Moreover, there are notebook suppliers, such as Alienware, who sell gaming laptops with factory-overclocked processors. Hence, while Nvidia can make notebook GPU overclocking difficult, it is unlikely to eliminate it completely.
Well that’s good news for MSI.
Now if their motherboards would allow power limits beyond 57W for their MQ CPUs for more than 2 mins, or if they could find a way to lift the locked TDP of the HQ chip line, they’d pull a zillion customers from everywhere else, even with their ridiculously high prices.
Both the GT72 and GT80 have HQ CPUs that most enthusiasts hate with a passion. Overclocking becomes a moot point to them if they can’t upgrade the CPUs. MXM upgrades are welcome, but MSI still hasn’t specified which countries will get the MXM upgrade facility. Even if it were widespread, it won’t be any cheaper than the usual MXM sources on ebay/Eurocom; overly expensive, making them pretty wasteful if someone merely wants to bench. Desktop benching is the only way to go.
Yeah, I know the HQ chips are terrible. You probably don’t even understand how bad they are. After 2.5 minutes of heavy load, they lock to 47W TDP and barely even hold stock boost clocks. This happens in unlocked fps games or super demanding games.
It’s Intel’s fault that HQ chips are the only ones available though. I don’t blame MSI all that much, but it would have been good if they could have allowed their chips to draw as much power as is set in the BIOS via XTU indefinitely like the MQ chips in Clevos/Alienwares/etc do.
And yes, laptop enthusiasts are getting stabbed in cold blood. Not only do we already pay a ridiculous amount for gimped hardware, but we now are being screwed by nVidia pushing official vBIOS specs to disable OCing (even if MSI and ASUS caught that and denied the vBIOS updates to the chips they ship). We were already getting screwed by Intel. AMD is nowhere to be found, and does not care to make a showing.
I bade farewell to the golden era of laptop enthusiasts a while ago.
?? Not sure why you had to say that, but you need to let go of that condescending tone if you want to convey a point and be taken seriously.
This is nothing new; older Sandy Bridge chips show the same behaviour. I have the M18x R1. Boost clocks are not a guarantee; it comes down to the electrical or thermal headroom available at the time. If you are loading all logical processors then your max turbo is obviously limited. There is little use in looking at Intel’s stated max turbo for single-core loads when determining how much of a gain you are getting in your situation, without considering core-by-core loads. When all cores are loaded you will, in the worst case, get the base frequency of that chip. Turbo Boost works best for non-sustained workloads where there are peaks in execution demand.
As for blame, the blame is on MSI for not offering the MQ version of the processors given the prices they are willing to charge. Sager sells laptops with MQ processors just fine. There is no shortage of MQ supply; it all comes down to the OEM’s choice of motherboard PCB design, and the BGA option saves them cost. The likelihood of a CPU failing, apart from bad handling/abuse, is very rare; on any mobo that comes back with a failure, the CPUs are extracted and reused after certification on a refurb PCB if the original mobo PCB is not salvageable. Costs, profits etc. are the reasons for these decisions.
As for the rest of your post, I agree: the golden era of laptops for enthusiasts is coming to an end. The writing has been on the wall for some time with the cartel that was the MXM supply chain. Whether MSI’s bold claims of MXM upgradability hold up remains to be seen over the next 2-3 years. I won’t be surprised if it all comes down to a few countries with mostly poorly supported upgrade programs. Intel’s long-term desire to move to BGA for most consumer products, bar some desktops, is the other nail in the coffin.
1 – If you set XTU or BIOS to allow your CPU extra TDP headroom, the MQ chips accept and you’re fine. If you set it for the HQ chips, the HQ chips DENY THE BIOS SETTINGS and throttle anyway. This is what I’m talking about.
2 – Intel has not allowed manufacturers to use MQ chips in new laptops since August 2014. The GT72 and GT80 were created afterward and thus were, by Intel’s decree, forced to use the HQ line. Here I cannot fully blame MSI, which is why in my original post I said it would be nice if their BIOS managed to allow the chips to use more than the default TDP somehow. Only existing models (PxxxSM-A from Clevo, GT60 and GT70 from MSI) get to keep using MQ chips.
The reason Alienware didn’t continue using MQ is because their original system design does not like the 900M chips. They have throttling issues, especially beyond stock, and have issues with the fans working on the slave cards in the SLI model, etc. Due to needing to make new models to properly support the 900M chip line, they were forced to HQ chips as well.
If you notice, clevo’s two new lines contain HQ chips and desktop CPUs. If the SM-A boards were to be changed in some way, they’d have to re-make them with HQ chips… so they didn’t bother, and simply updated their BIOS for the 900M chips.
I was not trying to be condescending when I said you likely did not realize exactly how bad the HQ chips are. Your statements about the power allowance available at the time show, however, that I was right that you didn’t know about the difference in accepting BIOS settings. Sandy Bridge and Ivy Bridge chips can be tuned to draw extra power (as much as the board will give). I know some of the earlier machines (like the M17x R4) had a board limitation of somewhere around 67-70W on the chips, but that’s FAR better than the 47W these things have now (and Haswell draws more power than both Sandy Bridge and Ivy Bridge do).
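For context on where numbers like the 47W cap above actually live: Intel exposes the package power limits through the MSR_PKG_POWER_LIMIT register (0x610), which tools like XTU and the BIOS write to. Below is a minimal sketch of decoding its PL1/PL2 fields in Python, assuming the common 1/8-watt power unit (the real unit comes from MSR_RAPL_POWER_UNIT, 0x606); the raw value used here is a hypothetical example, not read from any of the machines discussed.

```python
# Sketch: decode Intel's MSR_PKG_POWER_LIMIT (0x610) PL1/PL2 fields.
# Assumes the common 1/8 W power unit; on real hardware the unit comes
# from MSR_RAPL_POWER_UNIT (0x606), and the raw value would be read with
# e.g. `rdmsr 0x610` from msr-tools (root + msr kernel module required).

POWER_UNIT_W = 0.125  # assumed power unit in watts

def decode_pkg_power_limit(raw):
    """Return (PL1 watts, PL1 enabled, PL2 watts, PL2 enabled)."""
    pl1_w = (raw & 0x7FFF) * POWER_UNIT_W           # bits 14:0  - PL1 limit
    pl1_on = bool(raw & (1 << 15))                  # bit  15    - PL1 enable
    pl2_w = ((raw >> 32) & 0x7FFF) * POWER_UNIT_W   # bits 46:32 - PL2 limit
    pl2_on = bool(raw & (1 << 47))                  # bit  47    - PL2 enable
    return pl1_w, pl1_on, pl2_w, pl2_on

# Hypothetical raw value for a 47 W PL1 / 58.75 W PL2 part:
# PL1 = 47 / 0.125 = 376 = 0x178, PL2 = 58.75 / 0.125 = 470 = 0x1D6
raw = (1 << 47) | (0x1D6 << 32) | (1 << 15) | 0x178
print(decode_pkg_power_limit(raw))  # (47.0, True, 58.75, True)
```

The locked-down behaviour the thread describes amounts to the firmware either ignoring writes to these fields or re-clamping them, regardless of what XTU asks for.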
1) XTU or BIOS settings are irrelevant if the OEM has decided to lock down or limit the processor current limit. This has nothing to do with the CPU being MQ or HQ.
2) Nowhere have I heard officially that Intel forced OEMs to start using HQ rather than MQ chips; link to the official press release? And again, technically this has nothing to do with HQ or MQ, or HQ being more horrible than MQ. Apart from the package, they only differ in two areas: VT-d and the GT graphics dynamic boost clock (1.15 vs 1.2 on the HQ). These are a function of binning and fusing off features before packaging.
The ability to raise power/current/short-duration limits via XTU or the BIOS is a moot point when the cooling solution is built to satisfy only the original rated power envelope. A lack of thermal headroom results in throttling anyway; worse yet, AW18 owners have reported full-on hangs and reboots. You seem to be under the impression that merely increasing the processor current limit, where possible, allows for far better overclocking on these machines. Take a good look around at people complaining about the AW18, for example, where they are unable to push anything beyond 88.000A because the fans don’t rev up until everything gets nice and toasty. We could use HWiNFO fan control in the past on M18Xs, but AW18 users are out of luck; last time I checked, that mode of control effectively messed up the GPU fan when you tried to force the CPU fans to max speed. They were hoping for a BIOS update to fix the fan tables; I won’t be surprised if nothing has happened on that front so far, in typical Dellienware fashion.

Even then, what good does that do? The cooling solution is not made to dissipate the extra heat generated by such overclocking endeavours. Benchers don’t care; they go for extreme cooling because they bench, so it is not an issue for them. It is an issue for people looking for desktop-like overclocks for daily usage across all tasks, including gaming. The issues faced by regular overclockers are not a big deal, since they are not really overclocking that much because of the lack of thermal headroom. The real issue here is that extreme benching enthusiasts are pissed because they can’t swap out the CPU. For the regular user, the ability to swap out the CPU for upgrades is also denied, so that is another aspect that sucks, although this is debatable again given the cost of buying one of these chips on ebay or elsewhere.
3) On your claims as to why Alienware didn’t use MQ chips: yes, I know about Alienware having throttling issues with 900Ms, but that has nothing to do with the 2015 models; it is an issue seen on the older AW18s and M18x R2s. I am sure you might have got that answer from Dell reps or MSI reps or whoever, but it makes absolutely no sense, because the choice of CPU packaging has no bearing on the rest of the system’s thermal and fan-control handling. The M18x had separate heatpipes and coolers for the CPUs and GPUs. They did away with the AW18 model because they want to push the Graphics Amplifier down their loyal customers’ throats.
Give me the exact technical reason how an MQ CPU and the corresponding mobo design “hampers” 900M operation. What is the difficulty in implementing an FSM to provide proper fan control based on the thermal-diode feedback present on the ASICs? How exactly does the package have any control over the logic of the FSMs found in the vBIOS? What bus or protocol layers are causing an issue, if any, and how? You won’t find one, because none exists beyond the PR spin their reps put out on NBR and similar forums where enthusiasts are furious.
Again, about your last paragraph: there are lots of misconceptions going around in enthusiast circles about how Intel’s Turbo Boost 2.0 works. What I have stated is correct from the outset: no matter your ability to increase the processor current limit and/or core current limits, you are working within the confines of the dynamic thermal capacitance set by the hardware, with the power budget accumulated after idle periods or lightly loaded conditions.
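The accumulated-power-budget idea can be illustrated with a toy simulation, under the commonly described RAPL model: the package may draw up to a burst limit (PL2) as long as an exponentially weighted moving average of power stays below the sustained limit (PL1). All numbers here (47W PL1, 60W demand, 28s averaging constant) are illustrative assumptions, not measurements from any of the laptops discussed.

```python
# Toy simulation of Turbo Boost 2.0's accumulated power budget,
# assuming the commonly described RAPL model: burst up to PL2 while an
# exponentially weighted moving average of package power stays under PL1.
# PL1/PL2/TAU values are assumptions for illustration, not measured.

PL1 = 47.0   # sustained package power limit, watts (assumed)
PL2 = 60.0   # short-term burst limit, watts (assumed)
TAU = 28.0   # averaging time constant, seconds (assumed)
DT = 1.0     # simulation step, seconds

def simulate(seconds, demand=PL2):
    """Return per-second package power draw under a constant heavy load."""
    avg, trace = 0.0, []
    for _ in range(int(seconds / DT)):
        # Draw full burst power only while the running average allows it.
        power = demand if avg < PL1 else PL1
        # Update the exponentially weighted moving average of power.
        avg += (DT / TAU) * (power - avg)
        trace.append(power)
    return trace

trace = simulate(300)
# Starting from idle (empty budget), the chip bursts at 60 W, then the
# average catches up and the draw settles to the 47 W sustained limit:
# the familiar "fine for a while after idle, then capped" pattern.
print(trace[0], trace[-1])  # 60.0 47.0
```

This is why raising the current limit alone changes little: the burst phase gets faster, but the sustained ceiling, set by PL1 and ultimately by the cooling solution, is what a long gaming load actually runs against.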
If you guys want to complain about something, the best place to start would be better cooling solutions, before the ability to tweak via BIOS/XTU, because without that it is all moot. Asetek teamed up with Alienware for a water-cooling demo; whatever happened to that project? If people are paying premium prices then they deserve premium specs and abilities before thinking about overclocking; that part will naturally follow.