All About AI

From Words to Art: How to Craft Perfect Text-to-Image Prompts

Text-to-image models are like genies with graphic tablets. But to get that perfect wish granted—be it a neon raccoon in a cyberpunk alley or a Renaissance-style cheeseburger portrait—you need to speak their language: the prompt. This guide is a follow-up to my intro to AI […]

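To make "speaking their language" concrete, here is a minimal sketch of a structured text-to-image prompt, assuming the open-source diffusers library and a Stable Diffusion checkpoint; the model name, prompt wording, and generation settings are illustrative placeholders rather than the exact setup covered in the full guide.

```python
# A minimal text-to-image sketch (assumes: pip install diffusers transformers torch,
# a CUDA-capable GPU, and the publicly hosted Stable Diffusion v1.5 checkpoint).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

# A structured prompt: subject, setting, style, and quality modifiers.
prompt = (
    "a neon raccoon in a cyberpunk alley at night, rain-slicked pavement, "
    "glowing signage, cinematic lighting, highly detailed digital art"
)
negative_prompt = "blurry, low quality, deformed, extra limbs"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7.5,        # how strongly the image should follow the prompt
    num_inference_steps=30,    # more steps is slower but usually cleaner
).images[0]
image.save("neon_raccoon.png")
```

The point is less the library than the prompt's shape: subject first, then setting, style, and quality cues, with a negative prompt to steer the model away from common failure modes.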

Intro to Prompt Engineering: Unlocking the Full Power of LLMs

Large language models (LLMs) like GPT-4, Claude, and Gemini aren’t magic, but they’re pretty close. They can write essays, debug code, summarize research, translate across languages, and even reason through logic puzzles. But here’s the catch: you only get great results if you know how […]

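As a quick illustration of that catch, here is a small sketch comparing a vague prompt with a structured one, assuming the OpenAI Python SDK; the model name and prompt text are placeholder assumptions, and the same idea carries over to Claude, Gemini, or any other chat-style LLM.

```python
# Compare a vague prompt with a structured one (assumes: pip install openai
# and an OPENAI_API_KEY set in the environment).
from openai import OpenAI

client = OpenAI()

vague = "Tell me about sorting."

# A structured prompt spells out role, task, audience, constraints, and format.
structured = (
    "You are a patient Python tutor. Explain how merge sort works to a beginner "
    "in three short paragraphs, then show a commented example under 15 lines, "
    "and finish with one practice question."
)

for label, prompt in [("vague", vague), ("structured", structured)]:
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder model name; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

The difference in output is usually dramatic: the vague prompt invites a wandering answer, while the structured one pins down audience, length, and format.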

A Beginner’s Guide to the Buzz and the Brilliance of AI

AI, short for artificial intelligence, is one of the most exciting (and confusing) revolutions of our time. It’s suddenly everywhere: writing emails, chatting with you, creating art, even making business decisions. But what is it really? And how did we get here? Let’s break it all down — clearly, simply, and with a few laughs along the way.


Running llama.cpp on the Jetson Nano

So you’ve built GCC 8.5, survived the make -j6 wait, and now you’re eyeing llama.cpp like it’s your ticket to local AI greatness on a $99 dev board. With a few workarounds (read: Git checkout, compiler gymnastics, and a well-placed patch), you can run a quantized LLM locally on your Jetson Nano with CUDA 10.2, GCC 8.5, and a Prayer. Will it […]

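The post itself is about building the native llama.cpp binaries on the Nano; purely to illustrate what "running a quantized LLM locally" looks like once the toolchain works, here is a minimal sketch using the llama-cpp-python bindings instead, with a hypothetical GGUF model path and GPU-offload settings you would tune for the Nano's limited memory.

```python
# Run a quantized GGUF model locally (assumes: pip install llama-cpp-python,
# built with CUDA support, and a small quantized model already downloaded).
from llama_cpp import Llama

llm = Llama(
    model_path="models/tinyllama-1.1b-chat.Q4_K_M.gguf",  # hypothetical path and model
    n_ctx=1024,        # small context window to stay within limited RAM
    n_gpu_layers=20,   # offload some layers to the GPU; tune for available memory
)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["\n"],       # stop at the end of the answer line
)
print(output["choices"][0]["text"].strip())
```

On hardware this constrained, the quantization level and the number of offloaded layers are the knobs that decide whether it runs at all.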
