
Old hardware, New (AI) problems


What do we say to buying bleeding-edge hardware for running AI workloads?

Not today! I have an old HP Z600 (2009!) with a GPU that I wanted to use to run #Kubernetes, #Ollama, Open WebUI, and NVIDIA’s gpu-operator. It has been a solid machine through the years, with dual-socket Xeons and loads of ECC RAM, and it simply won’t quit. It has run several hypervisors, OpenStack, OpenShift, and more! But when I plugged in the GPU and loaded up my AI stack, I had no idea the rabbit hole I would go down.

Here is the short story: Ollama’s GPU runner is built with the AVX instruction set by default, and AVX is not available on old CPUs like the Z600’s Nehalem-era Xeons (Intel didn’t ship AVX until Sandy Bridge in 2011). I briefly thought it was time to retire my old machine and buy something a little newer, but no! The kind Ollama devs added a build argument to their Dockerfile: `--build-arg CUSTOM_CPU_FLAGS=`. Leaving the flag’s value empty builds the GPU runner without AVX, allowing my beloved Z600 to live on, continuing to serve modern workloads.
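As a rough sketch of what that looks like in practice (the image tag is illustrative; `CUSTOM_CPU_FLAGS` is the build argument from Ollama’s Dockerfile mentioned above), you can first check whether your CPU advertises AVX at all, then build the image with the flag left empty:

```shell
# Check whether this CPU advertises AVX (Linux).
# On pre-Sandy Bridge Xeons like the Z600's, this prints "no AVX".
grep -q avx /proc/cpuinfo && echo "AVX present" || echo "no AVX"

# Build Ollama's image with the CPU-flags build arg left empty, so the
# GPU runner is compiled without AVX. (The "ollama:no-avx" tag is just
# an illustrative name.)
git clone https://github.com/ollama/ollama.git
cd ollama
docker build --build-arg CUSTOM_CPU_FLAGS= -t ollama:no-avx .
```

The AVX check is worth running first: if `grep` finds the flag, you don’t need the custom build at all.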

Moral of the story? With a little ingenuity (and a helpful open-source community), old hardware can still punch above its weight in the AI era!
