The GIGABYTE MZ72-HB0 (Rev 3.0) Motherboard Review: Dual Socket 3rd Gen EPYC
by Gavin Bonshor on August 2, 2021 9:30 AM EST
Board Features
The GIGABYTE MZ72-HB0 is an E-ATX motherboard whose versatility comes from its dual SP3 sockets, designed for AMD's EPYC 7003 and 7002 processors. It can be installed into a regular chassis with E-ATX support, but most systems built around this model will likely use a 1U chassis designed for server and rack deployment. PCIe provision is plentiful, with five full-length PCIe 4.0 slots in total, operating at x16/x16/x16/x8/x8. For storage, there are four 7-pin SATA connectors, one PCIe 4.0 x4 M.2 slot, two SlimSAS PCIe 4.0 x4 NVMe ports, and three further SlimSAS ports that can be configured as either twelve SATA ports or three additional PCIe 4.0 x4 NVMe links. Memory support spans sixteen slots (eight per socket), with support for DDR4-3200 or DDR4-2933 memory in RDIMM, LRDIMM, and 3DS varieties, up to a maximum of 4 TB (2 TB per socket).
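As a sanity check on those capacity figures, here is a minimal arithmetic sketch; the 256 GB module size is an assumption (the 3DS capacity needed to reach 2 TB across eight slots per socket), while SlimSAS 4i connectors carry four lanes each by definition:

```python
# Back-of-the-envelope check of the MZ72-HB0's memory and storage figures
# quoted above (a sketch; slot counts taken from the spec table below).

SOCKETS = 2
DIMMS_PER_SOCKET = 8              # eight channels, one DIMM per channel
MAX_3DS_DIMM_GB = 256             # assumed largest supported 3DS module

total_dimms = SOCKETS * DIMMS_PER_SOCKET
max_memory_tb = total_dimms * MAX_3DS_DIMM_GB / 1024
print(f"{total_dimms} DIMMs x {MAX_3DS_DIMM_GB} GB = {max_memory_tb:.0f} TB")  # 4 TB

# Each SlimSAS 4i connector carries four lanes, so three of them can be
# remapped as 3 x PCIe 4.0 x4 NVMe links or broken out as 12 SATA ports.
slimsas_ports, lanes_per_port = 3, 4
print(f"{slimsas_ports * lanes_per_port} SATA ports or {slimsas_ports} x4 NVMe devices")
```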
For cooling, there are six 4-pin headers in total: two for CPU coolers and four for chassis fans. The board also includes a TPM 2.0 header for users wishing to run the Windows 11 operating system, although a separate module must be purchased to use this function, as one is not included in the packaging.
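For anyone planning a Windows 11 install, a quick way to confirm that an add-on module is detected is PowerShell's standard Get-Tpm cmdlet; the Python wrapper below is a minimal, purely illustrative sketch, assuming a Windows host and an elevated prompt:

```python
# Minimal sketch: confirm Windows sees a ready TPM 2.0 module before a
# Windows 11 install. Requires an elevated prompt on a Windows host.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Get-Tpm"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # look for TpmPresent : True and TpmReady : True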
GIGABYTE MZ72-HB0 Rev 3.0 E-ATX Motherboard
Warranty Period | 3 Years
Product Page | Link
Price | $1060
Size | E-ATX
CPU Interface | AMD SP3
Chipset | AMD EPYC Gen 3
Memory Slots (DDR4) | Sixteen DDR4, octa-channel per socket, supporting 2 TB per socket, LRDIMM/RDIMM/3DS, up to DDR4-3200
Video Outputs | 1 x D-Sub (ASPEED)
Network Connectivity | 2 x Broadcom BCM57416 10 GbE Base-T, 1 x Management LAN (ASPEED)
Onboard Audio | N/A
PCIe Slots for Graphics (from CPU) | 3 x PCIe 4.0 (x16/x16/x16), 2 x PCIe 4.0 (x8/x8)
PCIe Slots for Other (from PCH) | N/A
Onboard SATA | 4 x 7-pin SATA, 3 x SlimSAS (12 x SATA)
Onboard M.2/NVMe | 1 x M.2 PCIe 4.0 x4, 2 x NVMe (SlimSAS 4i), 3 x PCIe 4.0 x4 (via SlimSAS)
TPM 2.0 | Header (optional TPM 2.0 kit available)
Thunderbolt 4 (40 Gbps) | N/A
USB 3.2 (20 Gbps) | N/A
USB 3.2 (10 Gbps) | N/A
USB 3.2 (5 Gbps) | 2 x USB Type-A (rear panel), 2 x USB Type-A (one header)
USB 2.0 | 2 x USB Type-A (one header)
Power Connectors | 1 x 24-pin motherboard, 2 x 8-pin CPU
Fan Headers | 2 x 4-pin CPU, 4 x 4-pin chassis
IO Panel | 2 x USB 3.0 Type-A, 1 x RJ45 (ASPEED), 2 x RJ45 (Broadcom), 1 x Serial COM, UID button with LED
Connectivity includes two 10 GbE ports controlled by a Broadcom BCM57416 controller, while USB options are limited to two USB 3.0 Type-A ports on the rear panel, plus two USB 3.0 Type-A and two USB 2.0 ports available from internal headers. The MZ72-HB0 does include BMC functionality, which is delivered by an ASPEED BMC controller and includes a Realtek RTL8211E management LAN port and a D-Sub video output. For server and rack deployment, there is a UID button with a functional LED.
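The dedicated management port means the board can be monitored out-of-band over IPMI. As an illustration of what that enables (the BMC address and credentials below are placeholders, and the standard ipmitool utility must be installed on the client), polling the board's sensors might look like this:

```python
# Minimal sketch: poll the board's ASPEED BMC out-of-band with ipmitool.
# BMC_HOST, USER, and PASSWORD are placeholders for your own deployment.
import subprocess

BMC_HOST = "192.0.2.10"   # management LAN IP (placeholder)
USER, PASSWORD = "admin", "password"

# 'sdr list' dumps the sensor data repository: fan speeds, temps, voltages.
subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
     "-U", USER, "-P", PASSWORD, "sdr", "list"],
    check=True,
)
```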
Test Bed
Owing to some of the nuances of Intel's Rocket Lake processors, our policy is to check whether a system offers an automatic option to increase the processor's power limits. If it does, we select the liquid cooling option; if it does not, we leave the defaults unchanged. Adaptive Boost Technology is disabled by default.
Test Setup
Processor | 2 x AMD EPYC 7763 (280 W, $7890 each), 64 cores / 128 threads, 2.45 GHz base (3.5 GHz boost)
Motherboard | GIGABYTE MZ72-HB0 Rev 3.0 (BIOS 12.50.09)
Cooling | 2 x Noctua NH-U14S TR4-SP3
Power Supply | EVGA 1600 T2 80+ Titanium 1600 W
Memory | Micron 512 GB DDR4-3200 CL22 (16 x 32 GB)
Video Card | N/A
Hard Drive | Crucial MX300 1 TB
Case | Open Testbed
Operating System | Windows 10 Pro 64-bit, Build 20H2
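For context on what this configuration offers on paper, here is a short worked calculation of peak memory bandwidth, assuming all sixteen channels are populated at DDR4-3200 as in the table above:

```python
# Theoretical peak memory bandwidth of the dual-EPYC test configuration.
# DDR4-3200 moves 3200 MT/s across a 64-bit (8-byte) channel.
MT_PER_S = 3200e6
BYTES_PER_TRANSFER = 8
CHANNELS_PER_SOCKET = 8
SOCKETS = 2

per_channel = MT_PER_S * BYTES_PER_TRANSFER            # 25.6 GB/s
total = per_channel * CHANNELS_PER_SOCKET * SOCKETS    # 409.6 GB/s
print(f"{per_channel/1e9:.1f} GB/s per channel, {total/1e9:.1f} GB/s system-wide")
```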
Comments
tygrus - Monday, August 2, 2021
There are not many apps/tasks that make good use of more than 64c/128t. Some of those tasks are better suited to GPUs, accelerators, or a cluster of networked systems. Some tasks just love having TBs of RAM, while others will be limited by data I/O (storage drives, network). YMMV. Have fun testing it, but it will be interesting to find people with real use cases who can afford this.

questionlp - Monday, August 2, 2021
Being capable of handling more than 64c/128t across two sockets doesn't mean that everyone will drop more than that on this board. You can install higher-clocked 32c/64t processors into each socket and have a shedload of RAM and I/O for in-memory databases, software-defined (insert service here), or virtualization (or a combination of those). Install lower core count, even higher clock speed CPUs and you have yourself an immensely capable platform for per-core licensed enterprise database solutions.
niva - Wednesday, August 4, 2021
You can, but why would you when you can get a system where you can slot a single CPU with 64C? This is a board for the cases where 64C is clearly not enough, really catering towards server use. For cases where fewer cores but more power per core are needed, there are simply better options.
questionlp - Wednesday, August 4, 2021
The fastest 64c/128t EPYC CPU right now has a base clock of 2.45 GHz (7763), while you can get 2.8 GHz with a 32c/64t 7543. Slap two of those on this board and you'll get a lot more CPU power than a single 64c/128t, plus double the number of memory channels.

Another consideration is licensing. IIRC, VMware per-CPU licensing maxes out at 32c per socket. To cover a single 64c EPYC, you would end up with the same license count as a two 32c EPYC configuration. Some customers were grandfathered in back in 2020, but that's no longer the case for new licenses. Again, you can scale better with a 2-CPU configuration than with 1 CPU.

It all depends on the targeted workload. What may work for enterprise virtualization won't work for VPC providers, etc.
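A quick worked check of the licensing arithmetic above; this sketch assumes, per the comment, that each VMware per-CPU license covers one socket up to 32 cores, with an extra license per additional 32-core step:

```python
# Worked check of the per-CPU licensing argument in the comment above,
# assuming one license covers a socket up to 32 cores.
import math

def licenses_needed(cores_per_socket: int, sockets: int,
                    cores_per_license: int = 32) -> int:
    return sockets * math.ceil(cores_per_socket / cores_per_license)

print(licenses_needed(64, sockets=1))  # 1 x EPYC 7763: 2 licenses
print(licenses_needed(32, sockets=2))  # 2 x EPYC 7543: also 2 licenses
```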
linuxgeex - Monday, August 2, 2021
The primary use case is in-memory databases and/or high-volume, low-latency transaction services. The secondary use case is rack unit aggregation, which is usually accomplished with virtualisation; i.e., you can fit 3x as many 80-thread high-performance VPS into this as you can into any comparably priced Intel 2U rack slot, so this has huge value in a datacenter for anyone selling such a VPS in volume.
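For scale, two 64c/128t parts expose 256 hardware threads per board, which does accommodate three 80-thread instances; a trivial sketch of that arithmetic, using the comment's figures:

```python
# Rough check of the VPS-density claim: threads available per dual-socket
# board divided by an 80-thread VPS size (figures from the comment above).
THREADS_PER_CPU = 128          # 64 cores with SMT enabled
SOCKETS = 2
VPS_THREADS = 80

total_threads = THREADS_PER_CPU * SOCKETS   # 256 threads per board
print(total_threads // VPS_THREADS)         # 3 such VPS per board
```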
logoffon - Monday, August 2, 2021
Was there a revision 2.0 of this board?

Googer - Tuesday, August 3, 2021
There is a revision 3.0 of this board.

MirrorMax - Friday, August 27, 2021
No, and more importantly, this is exactly the same board as rev 1.0 but with a Rome/Milan BIOS, so you can basically BIOS-update rev 1.0 boards to rev 3.0. Odd that the review doesn't touch on this.

BikeDude - Monday, August 2, 2021
The Task Manager screenshot reminded me of Norton Speed Disk; we now have more CPUs than we had disk clusters back in the day. :P

WaltC - Monday, August 2, 2021
In one place you say it took 2.5 minutes to POST, and in another that it took 2.5 minutes to cold boot into Win10 Pro. I noticed you apparently used a SATA 3 connector for your boot drive, and I was reminded of booting Win7 from a SATA 3 7200 RPM platter drive taking me 90-120 seconds to cold boot. In Win7, the more crowded your system was with third-party apps and games, the longer it took to boot...;) (That's not the case with Win10/11, I'm glad to say, as with TBs of installed programs I still cold boot in ~12 secs from an NVMe OS partition.) Basically, servers are not expected to do much in the way of cold booting, as uptime is what most customers are interested in, but I doubt the SATA drive had much to do with the 2.5-minute cold-boot time. An NVMe drive might have shaved a few seconds off, but that's about it, imo.

Interesting read! Enjoyed it. Yes, the server market is far and away different from the consumer markets.