The Intel Skylake i7-6700K Overclocking Performance Mini-Test to 4.8 GHz
by Ian Cutress on August 28, 2015 2:30 PM EST

Frequency Scaling
Below is an example of our results from overclock testing, presented in the table we publish with both processor and motherboard reviews. Our method involves setting a multiplier and a voltage, running some stress tests, and then either raising the multiplier if successful or increasing the voltage at the point of failure/a blue screen. This methodology has worked well as a quick and dirty way to determine achievable frequency, though it lacks the subtlety that seasoned overclockers might apply in order to improve performance.
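For readers who want the loop spelled out, below is a minimal sketch of that quick-and-dirty procedure in Python. The functions apply_bios_settings() and run_stress_test() are hypothetical stand-ins: in practice the multiplier and voltage are set in the UEFI (or via a vendor utility) and the stress test runs after a reboot, so treat this purely as an illustration of the logic rather than anything we actually run.

```python
def apply_bios_settings(multiplier: int, vcore: float) -> None:
    # Placeholder: in reality this is a manual UEFI change or a vendor tool call.
    print(f"Applying {multiplier}x at {vcore:.3f} V")

def run_stress_test(minutes: int = 15) -> bool:
    # Placeholder: a mixed-AVX style load; returns True if no crash or blue screen.
    return True

def find_max_overclock(start_mult=42, max_mult=50,
                       start_vcore=1.200, max_vcore=1.450, vstep=0.025):
    """Raise the multiplier on success; add voltage on failure, as described above."""
    mult, vcore = start_mult, start_vcore
    best = None
    while mult <= max_mult and vcore <= max_vcore:
        apply_bios_settings(mult, vcore)
        if run_stress_test():
            best = (mult, vcore)   # stable at this setting: record it, try one bin higher
            mult += 1
        else:
            vcore += vstep         # failure/blue screen: add voltage, retry the same multiplier
    return best

print(find_max_overclock())
```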
This was done on our ASUS Z170-A sample while it was being tested for review. When we applied ASUS's automatic overclocking tool, Auto-OC, it settled on an overclock of 4.8 GHz. This was higher than we had previously seen from the same processor (even with the same cooler), so true to form I was skeptical, as ASUS Auto-OC has been rather optimistic in the past. But it sailed through our standard stability tests easily, without backing off the overclock once, meaning that it was not overheating by any means. As a result, I applied our short-form CPU tests, in a recently developed automated script, as an extra measure of stability.
These tests run in order of time taken, so last up was HandBrake converting a low quality film followed by a high quality 4K60 film. In low quality mode, all was golden. At 4K60, the system blue screened. I re-ran the same settings twice more to confirm it was not getting through, and three blue screens make a strikeout. But therein is a funny thing – while this configuration was stable with our regular mixed-AVX test, the large-frame HandBrake conversion made it fall over.
So as part of this testing, from 4.2 GHz to 4.8 GHz, I ran our short-form CPU tests over and above the regular stability tests. These form the basis of the results in this mini-test. Lo and behold, it failed at 4.6 GHz as well in similar fashion – AVX in OCCT OK, but HandBrake large frame not so much. I looped back with ASUS about this, and they confirmed they had seen similar behavior specifically with HandBrake as well.
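Our short-form script is not something we publish, but a rough sketch of this kind of HandBrake stability pass might look like the following. The file names and preset string are placeholders, and note that in our case the failure mode was a blue screen rather than a clean non-zero exit code, so a script like this mostly documents the workload rather than catching every failure.

```python
import subprocess

# Small-frame vs large-frame test cases (placeholder file names).
SOURCES = ["low_quality_clip.mkv", "4k60_clip.mkv"]

def handbrake_pass(src: str) -> bool:
    result = subprocess.run(
        ["HandBrakeCLI", "-i", src, "-o", "out_" + src,
         "--preset", "Fast 1080p30"],          # placeholder preset
        capture_output=True)
    return result.returncode == 0               # 0 = conversion completed

for src in SOURCES:
    # Repeat the run a few times: one clean pass is not proof of stability.
    ok = all(handbrake_pass(src) for _ in range(3))
    print(src, "stable" if ok else "failed")
```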
Users and CPU manufacturers tend to view stability in one of two ways. The basic way is as a pure binary yes/no. If the CPU ever fails in any circumstance, it is a no. When you buy a processor from Intel or AMD, that rated frequency is in the yes column (if it is cooled appropriately). This is why some processors seem to overclock like crazy from a low base frequency – because at that frequency, they are confirmed as working 100%. A number of users, particularly those who enjoy strangling a poor processor with Prime95 FFT torture tests for weeks on end, also take this view. A pure binary yes/no is also hard for us to test in a time-limited review cycle.
The other way of approaching stability is the sliding scale. At some point, the system is ‘stable enough’ for all intents and purposes. This is the situation we have here with Skylake – if you never go within 10 feet of HandBrake but enjoy gaming with a side of YouTube and/or streaming, or perhaps need to convert a few dozen images into a 3D model, then the system is stable.
To that end, ASUS is implementing a new feature in its automatic overclocking tool. Along with the list of stress-test and OC options, an additional checkbox for HandBrake-style data paths has been added. Ticking it will mean that a system needs more voltage to cope, or will top out at a lower frequency. But the sliding scale has spoken.
Incidentally, at IDF I spoke to Tom Vaughn, VP of MultiCoreWare (the company behind the open-source x265 HEVC video encoder and its accompanying GUI). We discussed video transcoding, and I brought up this issue on Skylake. He stated that the issue was well known to MultiCoreWare on overclocked systems. Despite the prevalence of AVX testing software, x265 encoding with the right algorithms will push parts of the CPU harder than other workloads, and with large frames it can require large amounts of data to be moved through the caches at the same time, adding further permutations to the stability question. We also spoke about expanding our x265 tests, covering best case/worst case scenarios from a variety of file formats and sources, in an effort to pinpoint where stability can be a factor as well as overall performance. These might be integrated into future overclocking tests, so stay tuned.
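As a sketch of what such an expanded x265 test could look like, the loop below sweeps a few presets across different source clips and reports whether each encode completed and how long it took. The clip names are placeholders; the x265 command line here assumes a Y4M input, a --preset selection, and an output file via -o, and is illustrative rather than our finalized benchmark.

```python
import itertools
import subprocess
import time

SOURCES = ["talking_head_1080p.y4m", "grain_heavy_4k60.y4m"]   # placeholder clips
PRESETS = ["fast", "medium", "slow"]

for src, preset in itertools.product(SOURCES, PRESETS):
    t0 = time.time()
    # Each combination stresses the core and the caches differently,
    # which is exactly where an overclock on the edge tends to fall over.
    r = subprocess.run(["x265", "--preset", preset, "-o", "out.hevc", src],
                       capture_output=True)
    status = "ok" if r.returncode == 0 else "FAILED"
    print(f"{src} @ {preset}: {status} in {time.time() - t0:.1f}s")
```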
103 Comments
Zoeff - Friday, August 28, 2015 - link
As an owner of a 6700K that's running at 4.8GHz, this is a very interesting article for me. :)

I've currently entered 1.470v in the UEFI and I can get up to 1.5v in CPUz. Anything lower and it becomes unstable. So I guess I'm probably on the high side voltage-wise...
zepi - Friday, August 28, 2015 - link
Sounds like a scorching voltage for 24/7 operation considering it is a 14nm process... But obviously, we don't really know if this is detrimental in the longer term.

0razor1 - Friday, August 28, 2015 - link
I believe it is. Ion shift. High voltage = breakdown at some level. Enough damage and things go amiss.

When one considers 1.35+ high for 22nm, I wonder why we're doing this (1.35+) at 14nm.
If it's OK, then can someone illustrate why one should not go over say 1.6V on the DRAM in 22nm, why stick to 1.35V for 14nm? Might as well use standard previous generation voltages and call it a day?
Further, where are the AVX-stable loads? Sorry, but no P95 small in-place FFTs with AVX = NOT stable enough for me. It's not the temps (I have an H100i) for sure. For example, on my 4670K, it takes 1.22 VCore for 4.6GHz, but 1.27 VCore when I stress with AVX loads (P95 being one of them).
It's *not* OK to say hey that synthetic is too much of a stress etc. I used nothing but P95 since K-10 and haven't found a better error catcher.
0razor1 - Friday, August 28, 2015 - link
To add to the above, downclocking the core on GPUs and running memcheck in OCCT is *it* for my VRAM stability tests when I OC my graphics cards. I wonder how people just 'look' for corruption in benchmarks like Fire Strike and call their OCs stable. It doesn't work.

Run a game and leave it idle for ~10 hours and come back. You will find glitches all over the place on your 'stable' OC.
Just sayin- OC stability testing has fallen to new lows in the recent past, be it graphic cards or processors.
Zoeff - Friday, August 28, 2015 - link
I tend to do quick tests such as Cinebench 15 and HandBrake, then if that passes I just run it for a week with regular usage such as gaming and streaming. If it blue screens or I get any other oddities I raise the voltage by 0.01v. I had to do that twice in the space of 1 week (started at 1.45v, 4.8GHz).

Oxford Guy - Saturday, August 29, 2015 - link
That's a great way to corrupt your OS and programs.

Impulses - Saturday, August 29, 2015 - link
Yeah I do all my strenuous testing first, if I have to simulate real world conditions by leaving two tests running simultaneously I do it too... Like running an encode with Prime in the background; or stressing the CPU, GPU, AND I/O simultaneously.AFTER I've done all that THEN I'll restore a pre-tinkering OS image, unless I had already restored one after my last BSOD or crash... Which I'll do sometimes mid-testing if I think I've pushed the OC far enough that anything might be hinky.
It's so trivial to work with backups like that, it should be SOP.
Oxford Guy - Sunday, August 30, 2015 - link
If a person is using an unstable overclock for daily work it may be hard to know if stealth corruption is happening.

kuttan - Sunday, August 30, 2015 - link
haha that is funny.

kmmatney - Saturday, September 19, 2015 - link
I do the same as the OP (but use Prime95 and HandBrake). If it passes a short test there (say one movie in HandBrake) I just start using the machine. I've had blue screens, but never any corruption issues. I guess corruption could happen, but the odds are pretty low. My computer gets backed up every night to a WHS server, so I can be fearless...