Language model sizing is a hot topic, and often an intimidating one for those who aspire to build their own LLMs but lack the necessary hardware, such as GPUs.
For this reason, smaller, lighter versions of some mainstream LLMs have begun to emerge, such as the 3.8-billion-parameter Phi4-mini-reasoning model, which weighs in at roughly 3.2 GB.
In a recent video, engineer Gary explains how to install it on a Raspberry Pi, and whilst it is nowhere near as fast as the heaviest-hitting LLMs, it remains perfectly functional.
Quench your curiosity and try it out for yourself.
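If you do, one common setup is to serve the model locally with Ollama and query it over its HTTP API. The sketch below is a minimal Python example along those lines; the model tag, prompt, and the use of Ollama itself are assumptions for illustration, not necessarily the exact steps shown in the video.

```python
# Minimal sketch: query a Phi4-mini-reasoning model served locally by Ollama
# on the Raspberry Pi. Assumes `ollama serve` is running and the model has
# already been pulled (e.g. `ollama pull phi4-mini-reasoning` -- tag assumed).
# Needs the third-party `requests` package (pip install requests).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = {
    "model": "phi4-mini-reasoning",  # assumed model tag
    "prompt": "Explain, step by step, why 17 is a prime number.",
    "stream": False,                 # return one JSON object instead of a token stream
}

# A small model on a Pi can still take a while to answer, so allow a generous timeout.
response = requests.post(OLLAMA_URL, json=payload, timeout=600)
response.raise_for_status()

print(response.json()["response"])
```

Expect responses to arrive slowly compared with a GPU-backed setup, but for short reasoning prompts the Pi keeps up well enough to be genuinely useful.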