Information
Ollama is a tool that allows you to run and manage A.I. models locally on your own machines, providing privacy and control without relying on cloud-based services. While it works across multiple platforms, it integrates smoothly into any Linux workflow.
Installation
Installing is easy and can be done in multiple ways, which I will show you here. In the video above, my good friend and brother DistroTube goes through the motions on ArcoLinux, an Arch-based distro. Here, I will be expanding on that just a little, while including most of the commands used.
First, we will need to install the Ollama engine…
- Platform-Agnostic (Recommended)
curl -fsSL https://ollama.com/install.sh | sh
- ArchLinux (All Spins)
sudo pacman -S ollama && sudo systemctl enable --now ollama
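Either way, it is a good idea to confirm the engine is actually up before pulling any models. Assuming a systemd-based distro (the install script registers a systemd service as well), something like this should do it…

ollama --version
systemctl status ollama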
Now that we have the engine installed, it’s time to grab a model to use with it, since without one it will not do anything useful. There are quite a few, varying greatly in size, so be mindful of that.
We can select model(s) from This Link. Once we have picked one, we use the following command to download and run it (using llama3.1 in this example)…
ollama run llama3.1
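A couple of handy commands while you are at it: typing /bye exits the interactive prompt, and these two let you download a model ahead of time and see what you already have installed (along with each model’s size)…

ollama pull llama3.1
ollama list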
Some models are larger than others, reaching up to 232GB!!! Also, the bigger the model is, the more powerful your machine needs to be to handle it. Please keep that in mind before diving too deep into it.
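If your hardware is on the modest side, most models come in multiple sizes, and you can ask for a specific one with a tag. You can also remove anything you no longer need to reclaim the disk space. For example (the 8b tag does exist for llama3.1; check the model page for others)…

ollama run llama3.1:8b
ollama rm llama3.1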
Oh, and as mentioned by DT in the video, you can take it to the next level by customizing a model, making it respond using different personas, like Mario or any of the other available ones. This is just a fun quirk, not terribly useful. If you want to know how, check it out on the project’s GitHub; a small sketch of the idea follows below.
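For the curious, here is a rough sketch of how that persona trick works, modeled on the Mario example from the project’s README (treat it as illustrative rather than gospel). You write a Modelfile that layers a system prompt on top of an existing model…

FROM llama3.1
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

…then create and run your custom model with it…

ollama create mario -f ./Modelfile
ollama run mario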
Wrap up
That’s it. There’s nothing to it. A.I. is where the world is headed, so I thought I’d share this with y’all. It’s fun to try new things and keep up with technology. If you have any questions or issues, please report them to the devs upstream.
Cheers !