The video showcases the GMKtec Evo X2, a compact mini PC with up to 128 GB of shared RAM and AMD's Ryzen AI Max+ 395 processor, positioned as a way to run large language models and other AI workloads locally at a lower cost than Nvidia's upcoming offerings. It highlights the device's strong performance, memory bandwidth, and versatility for AI and gaming, and concludes that, despite some software-support challenges, it is a compelling option for AI developers and enthusiasts.
The video introduces the GMKtec Evo X2, a mini PC equipped with 128 GB of shared RAM and AMD's Ryzen AI Max+ 395 processor, designed for running local large language models (LLMs) and machine-learning tasks. While the much-anticipated Nvidia DGX Spark is still in development and may ship soon, GMKtec has already launched this capable alternative at less than half the price. The device offers a wide range of ports, including USB4, USB-A, an SD card reader, HDMI, DisplayPort, and 5 Gb Ethernet, making it a versatile machine for both AI and gaming.
At its core, the Evo X2 is powered by AMD's Ryzen AI Max+ 395, a 16-core, 32-thread processor with a maximum boost clock of 5.1 GHz and an integrated GPU comparable to a GeForce RTX 4060. It comes with 64 or 128 GB of LPDDR5X memory shared between CPU and GPU, plus dual PCIe Gen 4 SSD slots supporting up to 16 TB of storage. The device also supports Wi-Fi 7 and handles modern gaming and AI workloads efficiently. A key design choice is its static partitioning of the shared memory: a fixed portion of RAM is reserved as VRAM for the GPU, unlike Apple's unified memory architecture, which allocates memory between CPU and GPU dynamically. That reserved pool is what allows large local LLMs to sit entirely in GPU-addressable memory.
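To get a rough sense of why that shared-memory capacity matters, note that a model's weights alone scale with parameter count times bits per weight. The Python sketch below is a back-of-the-envelope estimate; the model sizes and quantization levels are illustrative assumptions, not figures taken from the video.

```python
# Rough estimate of how much of the shared RAM a quantized model's weights
# occupy. Real GGUF files add tokenizer/metadata overhead, and the KV cache
# grows with context length, so treat this as a sanity check, not a spec.

def weight_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Illustrative model sizes and quantization levels (assumptions, not from the video).
for name, params, bits in [
    ("32B model, 4-bit quant", 32, 4.5),   # ~4.5 effective bits/weight for a K-quant
    ("32B model, 8-bit quant", 32, 8.5),
    ("70B model, 4-bit quant", 70, 4.5),
]:
    print(f"{name}: ~{weight_footprint_gb(params, bits):.0f} GB of weights")
```

Even at 4-bit quantization, a 70-billion-parameter model needs roughly 40 GB for its weights, which is why a 96 GB-class GPU memory partition opens up models that simply do not fit on a typical discrete card.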
The presenter runs a range of benchmarks to evaluate the device's performance with different models, including 32-billion and 370-billion parameter models. The tests show that the Evo X2 can load and run large models effectively, with token-generation speeds varying by model size and offloading configuration. The device's memory bandwidth, measured with the industry-standard STREAM benchmark, surpasses that of many other systems, including the M4 Mac Mini. The presenter also compares the device against other machines, highlighting both its strengths and its limitations.
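For context, STREAM's headline number comes from its "triad" kernel, a = b + s*c, timed over arrays large enough to defeat the caches. The sketch below is a minimal NumPy approximation of that kernel, not the benchmark the presenter ran; because it is interpreted and split into two passes, it will report lower figures than the native C version.

```python
# Minimal STREAM-style "triad" sketch in NumPy (a[i] = b[i] + s*c[i]).
import time
import numpy as np

N = 100_000_000   # three float64 arrays of this size use roughly 2.4 GB of RAM
s = 3.0

b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)

best = float("inf")
for _ in range(5):                  # keep the fastest of several runs
    t0 = time.perf_counter()
    np.multiply(c, s, out=a)        # a = s * c
    np.add(a, b, out=a)             # a = b + s * c  (the STREAM "triad")
    best = min(best, time.perf_counter() - t0)

# STREAM convention: triad touches three arrays (read b, read c, write a).
# NumPy performs the kernel in two passes, so this figure is a lower bound
# on what the compiled C benchmark would report.
bytes_moved = 3 * N * a.itemsize
print(f"Approximate triad bandwidth: {bytes_moved / best / 1e9:.1f} GB/s")
```

The point of the exercise is the same as in the video: sustained memory bandwidth, not raw compute, is what caps token-generation speed for large models.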
Further, the video discusses the software ecosystem, particularly LM Studio and llama.cpp, which enable running LLMs across hardware platforms. The presenter explains the difference between the cross-platform Vulkan backend and AMD's own ROCm stack, whose support for this chip is still maturing. Despite some stability and software-support challenges, the Evo X2's hardware shows strong potential for AI workloads, especially once ROCm support for these AMD chips improves. The video emphasizes that the device's static memory partitioning offers stability and simplicity, though at the cost of some flexibility and memory efficiency.
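As an illustration of the offloading configurations mentioned above, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp, assuming a build with a GPU backend such as Vulkan or ROCm; the model path is hypothetical, and this is not the exact setup shown in the video.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes the wheel was built with a GPU backend (Vulkan or ROCm); the model
# path below is hypothetical -- substitute any local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=-1,   # -1 offloads every layer into the GPU memory partition
    n_ctx=4096,        # context window; larger values grow the KV cache
)

out = llm("Explain what memory bandwidth means for LLM inference.",
          max_tokens=128)
print(out["choices"][0]["text"])
```

Lowering n_gpu_layers keeps some layers on the CPU, which is the kind of offloading trade-off the presenter explores when a model does not fit comfortably in the GPU partition.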
Finally, the presenter weighs the pricing and overall value of the Evo X2: $1,499 for the 64 GB version and around $2,000 for the 128 GB model. Despite the premium price, the device comes across as a powerful, compact platform for AI development and research, beating alternatives such as the Mac Mini on memory bandwidth and core count. The video closes on a positive note about the device's future, particularly as AMD improves ROCm support and other software optimizations, making it a compelling choice for AI enthusiasts and developers looking for a high-performance mini PC.