Running llama.cpp on the Jetson Nano
So you’ve built GCC 8.5, survived the make -j6 wait, and now you’re eyeing llama.cpp like it’s your ticket to local AI greatness on a $99 dev board. With a few workarounds (read: a Git checkout, compiler gymnastics, and a well-placed patch), you can run a quantized LLM locally on your Jetson Nano with CUDA 10.2, GCC 8.5, and a prayer. Will it […]
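The full walkthrough covers the details, but the general shape of the build looks roughly like the sketch below. This is a minimal sketch only: the exact llama.cpp commit, the patch, the GCC 8.5 install path, and the model path are all assumptions stand-ins, not the post’s verified steps.

```sh
# Assumption: GCC 8.5 installed under /usr/local; CUDA 10.2's nvcc
# rejects newer host compilers, which is why the custom GCC matters.
export CC=/usr/local/bin/gcc
export CXX=/usr/local/bin/g++

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout <cuda-10.2-compatible-commit>   # placeholder; see the full post

# Makefile-era cuBLAS build; -j4 keeps the Nano's 4 GB of RAM in check
make LLAMA_CUBLAS=1 -j4

# Run a small quantized model (model path is illustrative),
# offloading some layers to the Nano's Maxwell GPU
./main -m models/7B/ggml-model-q4_0.bin -p "Hello from the Nano" -ngl 16
```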