Have you ever wanted an AI that doesn’t just give generic answers? If you want to train a local AI model with Ollama that speaks your language, you’ve come to the right place.
What if you could have a personal AI mentor that explains complex coding concepts with a sense of humor, all while running entirely on your laptop?
In this guide, we’ll walk through how to use Ollama to create a custom AI model. No cloud GPUs, no subscription fees, and no internet required once you’re set up.
Why Train a Local AI Model with Ollama?
Standard models like Phi-3 or Llama 3 are great, but they are “generalists.” By creating a custom Modelfile, you can:
- Customize Personality: Make it talk like a teacher, a developer, or even a comedian.
- Set the Tone: Control how creative or factual the responses are.
- Privacy: Since it runs locally, your data never leaves your machine.
Get Started With Ollama
A quick guide to configuring Ollama on your local machine.
Prerequisites
Before we start, ensure you have Ollama installed on your system and at least one base model downloaded (this tutorial uses phi3:mini).
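If you have Ollama installed but haven’t pulled the base model yet, the commands below should get you ready (the model name matches the one used in this tutorial; the download is a few GB):

```shell
# Confirm Ollama is installed and on your PATH
ollama --version

# Download the base model used in this guide
ollama pull phi3:mini

# Confirm the model is available locally
ollama list
```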
Step 1: Create Your Custom Modelfile
The “Modelfile” is the blueprint for your AI. Think of it as a wrapper that adds instructions on top of an existing model.
- Open a text editor (like Notepad or VS Code).
- Add the following configuration:
# Specify the base model
FROM phi3:mini

# Set the creativity level (0.0 = Robotic, 1.0 = Creative/Risky)
PARAMETER temperature 0.8

# Define the AI's personality
SYSTEM """
You are AmplifyAbhi, a tech mentor who explains coding and AI in a very funny way.
Always use simple examples and try to make the user laugh while they learn.
"""
Understanding “Temperature”
- 0.0: The model becomes predictable and robotic. It will give the same answer every time.
- 1.0: The model becomes highly creative and unpredictable—perfect for brainstorming or story writing.
- 0.8: This is a “sweet spot” used in the video to balance accuracy with a bit of flair. [02:39]
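If you want to experiment with temperature before committing to a value in your Modelfile, Ollama’s interactive session lets you change parameters on the fly (a quick sketch; `/set parameter` is part of the `ollama run` REPL):

```shell
ollama run phi3:mini
>>> /set parameter temperature 0.2
>>> Explain recursion in one sentence.
```

Try the same prompt at 0.2 and 0.8 and compare how much the answers vary between runs.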
Step 2: Build Your Model
Once you have saved your file (e.g., as AmplifyAbhiModel), open your terminal and navigate to the folder where the file is saved. Run the following command:
ollama create AmplifyAbhi -f AmplifyAbhiModel
If the build succeeds, you will see a success message. You can verify your new model is ready by running ollama list; you’ll see your custom model listed alongside the original base models. [06:41]
Step 3: Run and Test Your AI
Now it’s time to talk to your creation! Run your model using:
ollama run AmplifyAbhi
In the video, when asked “Who are you?”, the model responded not as a generic AI, but as a “personal jester” for tech, staying true to the funny persona defined in the System prompt. [07:58]
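Beyond the interactive chat, you can also send a one-shot prompt or call Ollama’s local REST API, which is handy for scripting (a sketch; assumes Ollama is serving on its default port 11434):

```shell
# One-shot prompt from the command line
ollama run AmplifyAbhi "Who are you?"

# Same question via the local HTTP API
curl http://localhost:11434/api/generate \
  -d '{"model": "AmplifyAbhi", "prompt": "Who are you?", "stream": false}'
```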
Conclusion
You don’t need a massive server farm to build your own version of AI. By using Ollama and a simple Modelfile, you can fine-tune the “vibe” and behavior of an AI to suit your specific needs.
What kind of AI will you build? A strict coding tutor? A poetic assistant? Let us know in the comments!
Useful Parameters
Max tokens
Controls response length.
PARAMETER num_predict 512
Context window
How much conversation history the model remembers during a chat. Higher values use more RAM.
PARAMETER num_ctx 4096
Top P
Controls word selection diversity.
PARAMETER top_p 0.9
Usually keep it between 0.8 and 0.95.
Repeat penalty
Prevents repetitive answers.
PARAMETER repeat_penalty 1.1
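Putting it all together, a Modelfile that combines the persona from Step 1 with the four parameters above might look like this (the values are illustrative; tune them for your use case):

```
FROM phi3:mini

PARAMETER temperature 0.8
PARAMETER num_predict 512
PARAMETER num_ctx 4096
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1

SYSTEM """
You are AmplifyAbhi, a tech mentor who explains coding and AI in a very funny way.
"""
```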
Where Are Models Saved?
Ollama stores models locally on your machine.
Usually:
macOS
~/.ollama/models
Windows
C:\Users\<username>\.ollama\models
Linux
/usr/share/ollama/.ollama/models
Best Practice for Tutorial Creators
If you produce AI or tutorial content, you can build multiple personalities from the same base model:
- amplifyabhi-teacher
- amplifyabhi-funny
- amplifyabhi-shorts
- amplifyabhi-interviewer
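Since each persona is just a small Modelfile, you can generate them from a single template instead of writing each one by hand. Here is a minimal Python sketch (the persona names, prompts, and temperatures are illustrative assumptions, not part of Ollama itself):

```python
# Sketch: render several persona Modelfiles from one template.
# Write each result to a file, then build it with `ollama create <name> -f <file>`.

TEMPLATE = """FROM {base}
PARAMETER temperature {temperature}
SYSTEM \"\"\"
{system}
\"\"\"
"""

# Hypothetical personas for a tutorial channel
PERSONAS = {
    "amplifyabhi-teacher": ("You are a patient coding teacher.", 0.4),
    "amplifyabhi-funny": ("You are a tech mentor who explains things with humor.", 0.8),
}


def build_modelfile(base: str, system: str, temperature: float) -> str:
    """Render the Modelfile text for one persona."""
    return TEMPLATE.format(base=base, temperature=temperature, system=system)


def build_all(base: str = "phi3:mini") -> dict:
    """Return a mapping of persona name -> Modelfile text."""
    return {
        name: build_modelfile(base, system, temp)
        for name, (system, temp) in PERSONAS.items()
    }


if __name__ == "__main__":
    for name, text in build_all().items():
        print(f"--- {name} ---")
        print(text)
```

Each generated string can be saved (for example as `Modelfile.teacher`) and built with one ollama create command per persona.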
Important thing many people misunderstand
You are NOT retraining the AI.
You are:
- wrapping the base model
- controlling behavior
- steering responses
Real training/fine-tuning is a different process involving:
- datasets
- GPUs
- LoRA or full fine-tuning
- epochs
- weights
Your current setup is:
- lightweight
- fast
- local
- practical
which is actually perfect for creators and developers.
Train a Local AI Model with Ollama: Watch the Full Tutorial
For a visual walkthrough of this process, check out the full video by AmplifyAbhi.
Explore More AI & Development Tutorials
We hope you found this Train a Local AI Model with Ollama guide interesting. We suggest browsing our AI tutorials to help you master AI integration and modern mobile development.