Build with Local AI

Run AI Locally is your go-to resource for tutorials, tips, and guides on running artificial intelligence models directly on your own devices. Discover how to deploy machine learning and deep learning models offline, optimize local inference, and leverage edge computing for faster, more private, and more cost-effective AI applications — no cloud required.


Generating Video with AI — Locally: The Next Frontier Is Already Here

Just when we thought AI-generated text and images were wild enough, we’re now entering a new chapter: text-to-video generation. But here’s the twist — it’s no longer just a cloud-only, GPU-farm kind of thing. We’re starting to see early tools that can run locally, on your own machine, turning short prompts or still images into motion. […]


Running llama.cpp on the Jetson Nano

So you’ve built GCC 8.5, survived the make -j6 wait, and now you’re eyeing llama.cpp like it’s your ticket to local AI greatness on a $99 dev board. With a few workarounds (read: a Git checkout, compiler gymnastics, and a well-placed patch), you can run a quantized LLM locally on your Jetson Nano with CUDA 10.2, GCC 8.5, and a prayer. Will it […]
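For flavor, here’s a minimal sketch of what that workaround flow looks like. The commit to check out and the compiler paths are assumptions standing in for the article’s exact steps, not a verified recipe:

```bash
# A sketch of the general flow, assuming a self-built GCC 8.5 installed
# under /usr/local/bin and an older llama.cpp revision that still accepts
# CUDA 10.2. The checkout target below is a placeholder, not a real commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout <commit-that-builds-with-CUDA-10.2>

# Build with the cuBLAS backend; CC/CXX point make at the newer compiler.
CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ make LLAMA_CUBLAS=1 -j4
```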

