Hosting AI Part 1
This page contains public notes on my journey hosting and using local AI.
Setting up AI Models
To try out models for coding, I installed DeepSeek Coder and Qwen via Ollama.
To install Ollama on my Linux desktop:
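A minimal sketch of that step, using the one-line installer published on the Ollama website:

```shell
# Download and run Ollama's official Linux install script
curl -fsSL https://ollama.com/install.sh | sh

# Verify the install worked
ollama --version
```

The script sets Ollama up as a systemd service on most distributions, so it starts automatically in the background.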
On other operating systems, you can install Ollama from the downloads page on the Ollama website.
To install the two models that I chose:
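A sketch of the pull commands, assuming the default model tags on the Ollama registry (the exact Qwen coding variant and tags may differ from what I used):

```shell
# Download the two coding models from the Ollama registry
ollama pull deepseek-coder
ollama pull qwen2.5-coder

# Confirm both models are available locally
ollama list
```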
At this point, I can run them locally to try out a quick chat interface.
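For example, assuming the model tags above, a quick interactive chat looks like this:

```shell
# Start an interactive chat session with one of the models
# (type /bye to exit)
ollama run deepseek-coder
```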
VS Code Setup
Setting up the two models in Visual Studio Code is quite easy, thanks to an extension called Continue.
Once the extension is installed, we just need to edit its config file and add the models that we installed.
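A sketch of the relevant part of that config file (in JSON-based versions of Continue this lives at `~/.continue/config.json`; the titles and model tags here are assumptions, so match them to whatever you pulled):

```json
{
  "models": [
    {
      "title": "DeepSeek Coder",
      "provider": "ollama",
      "model": "deepseek-coder"
    },
    {
      "title": "Qwen Coder",
      "provider": "ollama",
      "model": "qwen2.5-coder"
    }
  ]
}
```

The `provider` field tells Continue to talk to the local Ollama server, and each `model` value must match a tag shown by `ollama list`.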
At this point, we should be all set! Open a new chat, and you should see the models that you've installed.
Conclusion
I set up these two models, but you can use any models you like, and switching between them in the VS Code Continue extension is just as easy.
At the time of writing, I've just set these up myself, and this is only part 1 of the series. Next, it's time to test how helpful these models actually are for coding. Stay tuned for results and possible improvements!
Part of: YouTube Channel Notes