The iPhone XS & XS Max Review: Unveiling the Silicon Secrets
by Andrei Frumusanu on October 5, 2018 8:00 AM EST
The A12 Vortex CPU µarch
When talking about the Vortex microarchitecture, we first need to talk about exactly what kind of frequencies we’re seeing on Apple’s new SoC. Over the last few generations Apple has been steadily raising frequencies of its big cores, all while also raising the microarchitecture’s IPC. I did a quick test of the frequency behaviour of the A12 versus the A11, and came up with the following table:
Maximum Frequency vs Loaded Threads (Per-Core Maximum MHz)
(Columns denote the total number of loaded threads; a blank cell means that core is idle at that thread count.)

| Apple A11 | 1 | 2 | 3 | 4 | 5 | 6 |
|-----------|------|------|------|------|------|------|
| Big 1 | 2380 | 2325 | 2083 | 2083 | 2083 | 2083 |
| Big 2 | | 2325 | 2083 | 2083 | 2083 | 2083 |
| Little 1 | | | 1694 | 1587 | 1587 | 1587 |
| Little 2 | | | | 1587 | 1587 | 1587 |
| Little 3 | | | | | 1587 | 1587 |
| Little 4 | | | | | | 1587 |

| Apple A12 | 1 | 2 | 3 | 4 | 5 | 6 |
|-----------|------|------|------|------|------|------|
| Big 1 | 2500 | 2380 | 2380 | 2380 | 2380 | 2380 |
| Big 2 | | 2380 | 2380 | 2380 | 2380 | 2380 |
| Little 1 | | | 1587 | 1562 | 1562 | 1538 |
| Little 2 | | | | 1562 | 1562 | 1538 |
| Little 3 | | | | | 1562 | 1538 |
| Little 4 | | | | | | 1538 |
Both the A11's and the A12's maximum frequencies are actually single-thread boost clocks – 2380MHz for the A11's Monsoon cores and 2500MHz for the new Vortex cores in the A12. This is just a 5% frequency boost in single-threaded scenarios. When adding a second big thread, the A11 and A12 clock down to 2325 and 2380MHz respectively. It's when we also run threads concurrently on the small cores that the two SoCs diverge: while the A11 further clocks down to 2083MHz, the A12 retains the same 2380MHz until it hits thermal limits and eventually throttles down.
On the small core side of things, the new Tempest cores are actually clocked more conservatively than their Mistral predecessors. When just one small core was running on the A11, it would boost up to 1694MHz. This behaviour is gone on the A12, where the maximum clock is 1587MHz. The frequency reduces slightly further, down to 1538MHz, when all four small cores are fully loaded.
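For those curious about the methodology, behaviour like that in the table above can be approximated with a very simple microbenchmark: spawn N busy threads and estimate each thread's effective clock from the retirement rate of a serially dependent chain of single-cycle adds. The sketch below is a minimal illustration of the idea rather than the actual tooling used for these measurements; thread-to-core placement is left to the scheduler, since iOS exposes no public affinity API.

```c
// Minimal sketch of a frequency-vs-loaded-threads probe (illustrative only).
// Each worker runs a serially dependent chain of single-cycle ADDs; since the
// chain retires at ~1 add per cycle, retired adds / elapsed time approximates
// that core's clock frequency.
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CHAIN_ITERS (100u * 1000u * 1000u)
#define MAX_THREADS 16

static atomic_int stop_flag;

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

static void *clock_probe(void *arg) {
    long id = (long)arg;
    uint64_t x = 0;
    while (!atomic_load(&stop_flag)) {
        double t0 = now_sec();
        for (uint32_t i = 0; i < CHAIN_ITERS; i++) {
            // Four dependent adds per iteration; the chain cannot be folded
            // away by the compiler and is limited to one add per cycle.
            __asm__ volatile("add %0, %0, #1\n\t"
                             "add %0, %0, #1\n\t"
                             "add %0, %0, #1\n\t"
                             "add %0, %0, #1\n\t"
                             : "+r"(x));
        }
        double dt = now_sec() - t0;
        printf("thread %ld: ~%.0f MHz\n", id, 4.0 * CHAIN_ITERS / dt / 1e6);
    }
    return (void *)(uintptr_t)x;
}

int main(int argc, char **argv) {
    int nthreads = (argc > 1) ? atoi(argv[1]) : 1;
    if (nthreads > MAX_THREADS) nthreads = MAX_THREADS;
    pthread_t tid[MAX_THREADS];
    for (long i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, clock_probe, (void *)i);
    sleep(10);                      // let DVFS settle, then stop the workers
    atomic_store(&stop_flag, 1);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Running this with 1 through 6 worker threads and logging the reported per-thread MHz reproduces the shape of the table above.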
Much improved memory latency
As mentioned on the previous page, it's evident that Apple has put a significant amount of work into the cache hierarchy as well as the memory subsystem of the A12. Going back to a linear latency graph, we see the following behaviours for full random latencies, for both the big and the small cores:
The Vortex cores have only a 5% frequency boost over the Monsoon cores, yet the absolute L2 memory latency has improved by 29%, from ~11.5ns down to ~8.8ns. This means the new Vortex cores' L2 cache now completes its operations in significantly fewer cycles – roughly 22 cycles at 2.5GHz versus roughly 27 cycles at 2.38GHz for Monsoon. On the Tempest side, the L2 cycle latency seems to have remained the same, but again there's been a large change in terms of L2 partitioning and power management, allowing access to a larger chunk of the physical L2.
The test depth only went up to 64MB, and it's evident that the latency curves haven't yet flattened out in this data set, but it's visible that latency to DRAM has seen some improvements. The larger difference in DRAM access latency for the Tempest cores could be explained by a raised maximum memory controller DVFS frequency when just the small cores are active – their performance will look even better when a big thread is also running on the big cores.
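The latency curves themselves come from the classic full-random pointer-chase pattern: an array is linked into one random cycle and each load's address depends on the previous load's result, so the time per step is the full load-to-use latency at that test depth. Below is a minimal sketch of such a test, not the exact harness behind these graphs:

```c
// Minimal sketch of a full-random pointer-chasing latency test (illustrative).
// Each load depends on the previous one, so time/step equals the load-to-use
// latency at the given working-set depth.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_latency_ns(size_t bytes, uint64_t steps) {
    size_t n = bytes / sizeof(void *);
    void **arr = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));

    // Create a random visiting order, then link the elements in that order so
    // the pointer chain forms a single cycle covering the whole working set.
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {          // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        arr[idx[i]] = &arr[idx[(i + 1) % n]];

    struct timespec a, b;
    void **p = &arr[idx[0]];
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (uint64_t i = 0; i < steps; i++)
        p = (void **)*p;                          // serially dependent loads
    clock_gettime(CLOCK_MONOTONIC, &b);

    double ns = ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec))
                / (double)steps;
    if (p == NULL) puts("unreachable");           // keep the chase live
    free(idx);
    free(arr);
    return ns;
}

int main(void) {
    // Working sets from 16KB (well inside the caches) up to 64MB (out in DRAM).
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)
        printf("%6zu KB : %6.2f ns\n", kb,
               chase_latency_ns(kb * 1024, 20u * 1000u * 1000u));
    return 0;
}
```

The depths at which the reported latency steps up correspond to the L1, L2, and system-cache boundaries discussed here.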
The system cache of the A12 has seen some dramatic changes in its behaviour. While bandwidth in this part of the cache hierarchy has seen a reduction compared to the A11, latency has been much improved. One significant effect here can be attributed either to the L2 prefetchers or – what I also see as a possibility – to prefetchers on the system cache side: both the latency behaviour and the apparent number of streaming prefetchers have improved.
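For completeness, bandwidth at the different levels of the hierarchy is usually probed with a simple streaming-read kernel over growing working sets; the points where the resulting curve steps down mark the transitions from L2 to the system cache to DRAM. The following is a minimal sketch under that assumption – real tests also use vector and non-temporal accesses:

```c
// Minimal sketch of a streaming read-bandwidth test over growing working sets
// (illustrative only; production tests also use NEON and non-temporal loads).
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

static double read_bw_gbs(size_t bytes) {
    int passes = (int)((256u << 20) / bytes) + 1;   // ~256MB of traffic per size
    size_t n = bytes / sizeof(uint64_t);
    uint64_t *buf = malloc(bytes);
    memset(buf, 1, bytes);                          // fault the pages in up front

    volatile uint64_t sink = 0;
    uint64_t sum = 0;
    double t0 = now_sec();
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i += 8)           // read one 64B line per step
            sum += buf[i]   + buf[i+1] + buf[i+2] + buf[i+3]
                 + buf[i+4] + buf[i+5] + buf[i+6] + buf[i+7];
    double dt = now_sec() - t0;

    sink = sum;                                      // keep the reads observable
    (void)sink;
    free(buf);
    return (double)bytes * passes / dt / 1e9;
}

int main(void) {
    for (size_t kb = 64; kb <= 64 * 1024; kb *= 2)
        printf("%6zu KB : %6.1f GB/s\n", kb, read_bw_gbs(kb * 1024));
    return 0;
}
```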
Instruction throughput and latency
Backend Execution Throughput and Latency

| Instruction | Cortex-A75 Exec | Cortex-A75 Lat | Cortex-A76 Exec | Cortex-A76 Lat | Exynos-M3 Exec | Exynos-M3 Lat | Monsoon Exec | Monsoon Lat | Vortex Exec | Vortex Lat |
|---|---|---|---|---|---|---|---|---|---|---|
| Integer Arithmetic (ADD) | 2 | 1 | 3 | 1 | 4 | 1 | 6 | 1 | 6 | 1 |
| Integer Multiply 32b (MUL) | 1 | 3 | 1 | 2 | 2 | 3 | 2 | 4 | 2 | 4 |
| Integer Multiply 64b (MUL) | 1 | 3 | 1 | 2 | 1 (2x 0.5) | 4 | 2 | 4 | 2 | 4 |
| Integer Division 32b (SDIV) | 0.25 | 12 | 0.2 | < 12 | 1/12 - 1 | < 12 | 0.2 | 10 | 0.2 | 8 |
| Integer Division 64b (SDIV) | 0.25 | 12 | 0.2 | < 12 | 1/21 - 1 | < 21 | 0.2 | 10 | 0.2 | 8 |
| Move (MOV) | 2 | 1 | 3 | 1 | 3 | 1 | 3 | 1 | 3 | 1 |
| Shift ops (LSL) | 2 | 1 | 3 | 1 | 3 | 1 | 6 | 1 | 6 | 1 |
| Load instructions | 2 | 4 | 2 | 4 | 2 | 4 | 2 | | 2 | |
| Store instructions | 2 | 1 | 2 | 1 | 1 | 1 | 2 | | 2 | |
| FP Arithmetic (FADD) | 2 | 3 | 2 | 2 | 3 | 2 | 3 | 3 | 3 | 3 |
| FP Multiply (FMUL) | 2 | 3 | 2 | 3 | 3 | 4 | 3 | 4 | 3 | 4 |
| Multiply Accumulate (MLA) | 2 | 5 | 2 | 4 | 3 | 4 | 3 | 4 | 3 | 4 |
| FP Division (S-form) | 0.2-0.33 | 6-10 | 0.66 | 7 | > 0.16 | 12 | 0.5 | 10 | 1 | 8 |
| FP Load | 2 | 5 | 2 | 5 | 2 | 5 | | | | |
| FP Store | 2 | 1-N | 2 | 2 | 2 | 1 | | | | |
| Vector Arithmetic | 2 | 3 | 2 | 2 | 3 | 1 | 3 | 2 | 3 | 2 |
| Vector Multiply | 1 | 4 | 1 | 4 | 1 | 3 | 3 | 3 | 3 | 3 |
| Vector Multiply Accumulate | 1 | 4 | 1 | 4 | 1 | 3 | 3 | 3 | 3 | 3 |
| Vector FP Arithmetic | 2 | 3 | 2 | 2 | 3 | 2 | 3 | 3 | 3 | 3 |
| Vector FP Multiply | 2 | 3 | 2 | 3 | 1 | 3 | 3 | 4 | 3 | 4 |
| Vector Chained MAC (VMLA) | 2 | 6 | 2 | 5 | 3 | 5 | 3 | 3 | 3 | 3 |
| Vector FP Fused MAC (VFMA) | 2 | 5 | 2 | 4 | 3 | 4 | 3 | 3 | 3 | 3 |
To compare the backend characteristics of Vortex, we've tested instruction throughput and latency. Throughput is determined by the number of execution units, while latency is dictated by the quality of their design.
The Vortex core looks pretty much the same as its predecessor Monsoon (A11) – with the exception that we're seemingly looking at new division units, as the execution latency has been shaved by 2 cycles on both the integer and FP side. On the FP side, division throughput has also doubled.
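As a reminder of how such figures are obtained: a serially dependent chain of an instruction exposes its latency, while a block of independent instances exposes how many can execute per cycle. The sketch below illustrates this for 64-bit integer MUL; it is a hypothetical harness rather than the exact kernels behind the table, and the cycle conversion assumes the 2.5GHz single-thread clock from the frequency table above.

```c
// Minimal sketch of latency/throughput microbenchmarks for 64-bit MUL on
// AArch64 (illustrative only).
// Latency: a serially dependent chain, each result feeding the next multiply.
// Throughput: independent multiplies on separate registers.
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS (200u * 1000u * 1000u)
#define CORE_GHZ 2.5   // assumed single-thread big-core clock (see table above)

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

static double mul_latency_cycles(void) {
    uint64_t x = 3, k = 5;
    double t0 = now_sec();
    for (uint32_t i = 0; i < ITERS; i++)
        __asm__ volatile("mul %0, %0, %1\n\t"
                         "mul %0, %0, %1\n\t"
                         "mul %0, %0, %1\n\t"
                         "mul %0, %0, %1\n\t"
                         : "+r"(x) : "r"(k));
    double dt = now_sec() - t0;
    __asm__ volatile("" :: "r"(x));               // keep the result live
    return dt * CORE_GHZ * 1e9 / (4.0 * ITERS);   // cycles per dependent MUL
}

static double mul_throughput_per_cycle(void) {
    // Eight independent chains so that latency (4) x throughput (2) worth of
    // multiplies can be in flight at once.
    uint64_t r0 = 1, r1 = 2, r2 = 3, r3 = 4, r4 = 5, r5 = 6, r6 = 7, r7 = 8, k = 3;
    double t0 = now_sec();
    for (uint32_t i = 0; i < ITERS; i++)
        __asm__ volatile("mul %0, %0, %8\n\t"
                         "mul %1, %1, %8\n\t"
                         "mul %2, %2, %8\n\t"
                         "mul %3, %3, %8\n\t"
                         "mul %4, %4, %8\n\t"
                         "mul %5, %5, %8\n\t"
                         "mul %6, %6, %8\n\t"
                         "mul %7, %7, %8\n\t"
                         : "+r"(r0), "+r"(r1), "+r"(r2), "+r"(r3),
                           "+r"(r4), "+r"(r5), "+r"(r6), "+r"(r7)
                         : "r"(k));
    double dt = now_sec() - t0;
    __asm__ volatile("" :: "r"(r0), "r"(r1), "r"(r2), "r"(r3),
                           "r"(r4), "r"(r5), "r"(r6), "r"(r7));
    return (8.0 * ITERS) / (dt * CORE_GHZ * 1e9); // independent MULs per cycle
}

int main(void) {
    printf("64b MUL latency    : ~%.1f cycles\n", mul_latency_cycles());
    printf("64b MUL throughput : ~%.1f per cycle\n", mul_throughput_per_cycle());
    return 0;
}
```

On a core matching the table above, the first figure should come out near 4 cycles and the second near 2 multiplies per cycle.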
Monsoon (A11) was a major microarchitectural update in terms of the mid-core and backend. It's there that Apple shifted the microarchitecture from the 6-wide decode of Hurricane (A10) to a 7-wide decode. The most significant change in the backend was the addition of two integer ALUs, upping the count from 4 to 6 units.
Monsoon (A11) and Vortex (A12) are extremely wide machines – with 6 integer execution pipelines (among which two are complex units), two load/store units, two branch ports, and three FP/vector pipelines, this gives an estimated 13 execution ports, far wider than Arm's upcoming Cortex-A76 and also wider than Samsung's M3. In fact, assuming we're not looking at an atypical shared-port situation, Apple's microarchitecture seems to far surpass anything else in terms of width, including desktop CPUs.
253 Comments
Andrei Frumusanu - Friday, October 5, 2018
Hello all, this article is still fresh off the presses - I'm sure there are still typos/grammar issues to be fixed in the coming hours.
Just a general note; this is my first iPhone review, and hopefully it makes up for AnandTech not having a review of the iPhone 8s/X last year. Unfortunately that happened at a time when the site no longer had a mobile editor, and I only rejoined a few months after that.
As always, feel free to reach out in the comments or via email.
OMGitsShan - Friday, October 5, 2018
As always, this is the iPhone review to wait for! Thank you Andrei

versesuvius - Friday, October 5, 2018
Potato.

Jetcat3 - Friday, October 5, 2018
Wonderful review Andrei! Just a few minor quibbles on the capacities of each li-ion battery:
XS is 2658 mAh
XS Max is 3174 mAh
Keep up the good work!
wicasapa - Friday, October 5, 2018
One thing was not clear in the display section... the XS display (OLED) uses more power on a black screen compared to the 8's LCD at the same brightness?! This doesn't sound right! Or is this power draw attributable to something else (face detection sensors, etc.)?

Andrei Frumusanu - Saturday, October 6, 2018
No, that's accurate. It's part of why the new phones regress in battery life compared to the LCD iPhones.

DERSS - Saturday, October 6, 2018
You asked about the display's matrix controlling logic. Patent research has shown that Apple developed LTPO TFT technology for this, however so far it is only confirmed to be used in the displays LG Display makes for the Apple Watch. For smartphone OLEDs Apple uses an older, simpler design for the transistor layer, as these big OLED panels are already super pricey anyway. And additional manufacturers like LG Display, where Apple has started to bring up manufacturing of iPhone XS/Max displays, only have a few Canon Tokki equipment sets, so they cannot really bring manufacturing prices down. (And Apple cannot move to different equipment, as it designed the iPhone X/XS/Max displays specifically for Canon Tokki equipment.)

wicasapa - Saturday, October 6, 2018
I understand the regression on the battery test you run - it is basically browsing the web, which is by and large white backgrounds and generally above the 60-65% threshold for the crossover of efficiency between OLEDs and LCDs. It actually makes sense for OLEDs to be less efficient in web-browsing-based battery tests. However, I was surprised that you measured less efficiency on a pure black screen!

Andrei Frumusanu - Sunday, October 7, 2018
Actually the web test is varied - I have a few lower-APL sites and even some black ones.

tipoo - Sunday, October 14, 2018
Common misconception - OLEDs don't just turn off fully for pure blacks; it takes active power to create a black. When they're off they're a murky grey.