llama.cpp with Vulkan for local LLMs on AMD