CodeGPT + WSL and Ollama: Complete Setup Guide | CodeGPT

Developers today are leveraging the power of AI to enhance their coding experience, and few combinations are as promising as integrating CodeGPT with WSL (Windows Subsystem for Linux) and Ollama. Whether you’re aiming to write code faster, generate context-aware suggestions, or run language models seamlessly on your local machine, setting up this intelligent developer environment can supercharge your productivity.

This guide provides a comprehensive walkthrough on how to configure CodeGPT with WSL and Ollama, a runtime for running local LLMs like Code Llama, Mistral, or Llama 2. Once configured properly, you'll be coding with an AI assistant that runs natively in your environment.

What is CodeGPT?

CodeGPT is an AI-powered coding assistant that integrates with your development environment, typically via Visual Studio Code. It uses advanced language models to understand and generate code, offering suggestions, debugging help, and even code explanations. While it traditionally relies on cloud-based models, the evolving trend is toward running these models locally — and that’s where Ollama comes in.

Why Use WSL and Ollama?

WSL gives Windows users access to a real Linux environment without needing a dual boot or VM. This makes it easier to run development tools and models that are typically Linux-native. Ollama enables developers to run large language models directly on their machines, ensuring low latency and full data privacy.

Combining these with CodeGPT allows you to create a supercharged, AI-assisted, privacy-conscious, cross-platform development setup.

Prerequisites

Before diving into the setup, make sure your machine meets the following requirements:

  • Windows 10 or 11 (with WSL 2 enabled)
  • Visual Studio Code with the CodeGPT extension installed
  • At least 16GB RAM and a modern CPU
  • Optional: GPU for faster model inference

Step-by-Step Setup Guide

1. Install WSL

If you haven’t already, install WSL 2 by running this command in PowerShell (as administrator):

wsl --install

Choose your favorite Linux distribution (like Ubuntu 22.04) when prompted.
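If you prefer to choose the distribution up front, `wsl --install` accepts a few useful flags. A quick sketch (the distribution name below is one example; run these in an elevated PowerShell window):

```shell
# Install a specific distribution instead of the default:
wsl --install -d Ubuntu-22.04

# Confirm the distribution is running under WSL 2:
wsl -l -v

# If it reports version 1, make WSL 2 the default for new installs:
wsl --set-default-version 2
```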

2. Install Ollama Inside WSL

Once WSL is set up, launch your Linux terminal and install Ollama using the official installation script:

curl -fsSL https://ollama.com/install.sh | sh

Ollama will now be available within your WSL environment.

Verify the installation by running:

ollama --version
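If the version check succeeds but the API doesn't respond later, the background service may not be running (common in WSL distributions without systemd). A quick check, assuming Ollama's default port 11434:

```shell
# Start the Ollama server manually if no service is running
# (leave this terminal open, or background it as shown):
ollama serve &

# Confirm the HTTP API is reachable on the default port:
curl http://localhost:11434/api/version
```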

3. Run a Model Locally with Ollama

Next, download and run a model such as Code Llama:

ollama run codellama

This will download the model (which can take a few minutes) and start an interactive session; type /bye to exit. The Ollama server then keeps the model available on demand for CodeGPT to call.
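If you'd rather download the model without opening an interactive chat, `ollama pull` does only the download. You can then smoke-test the HTTP API directly from the WSL terminal (the prompt below is just an example):

```shell
# Download only, without starting a chat session:
ollama pull codellama

# See which models are installed locally:
ollama list

# Send one non-streaming request to the native generate endpoint:
curl http://localhost:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Write a Python function that reverses a string.",
  "stream": false
}'
```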

4. Configure CodeGPT to Use Localhost API

In Visual Studio Code, go to the CodeGPT extension settings and point the extension at your local model. Ollama's native generate endpoint is:

http://localhost:11434/api/generate

If your CodeGPT version expects an OpenAI-compatible API instead, Ollama offers one via a compatibility layer, served under http://localhost:11434/v1.
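To confirm the compatibility layer works independently of the editor, you can call the OpenAI-style chat endpoint from the WSL terminal (the model name assumes you pulled codellama earlier):

```shell
# Ollama serves OpenAI-compatible routes under /v1:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "codellama",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```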

5. Test the Integration

Now, try asking CodeGPT to write a function or fix a bug. It should generate responses quickly, and your requests never leave your machine!

Troubleshooting Tips

  • Model Not Responding? Ensure Ollama is running and listening on the appropriate port.
  • VS Code Fails to Connect? Check that Windows and WSL can communicate (WSL 2 forwards localhost ports to Windows by default) and that no firewall rule is blocking them.
  • Performance is Slow? Consider using a lighter model or ensuring your system has enough RAM and CPU cores allocated to WSL.
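A few commands that help narrow these issues down. This is a sketch assuming default ports and file locations; the memory and CPU values are examples, not recommendations:

```shell
# Inside WSL: is anything listening on Ollama's default port?
ss -tlnp | grep 11434

# From PowerShell: restart WSL if localhost forwarding gets stuck:
#   wsl --shutdown

# To give WSL more RAM and CPU cores, create %UserProfile%\.wslconfig with:
#   [wsl2]
#   memory=12GB
#   processors=6
# then run `wsl --shutdown` and reopen your terminal.
```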

Benefits of This Setup

This local setup brings several key advantages:

  • Privacy: Your code and prompts never leave your machine.
  • Speed: No waiting on cloud latency or rate limits.
  • Offline Use: Keep coding even without internet access.
  • Custom Models: Easily switch or fine-tune LLMs via Ollama.

Conclusion

Bringing together CodeGPT, WSL, and Ollama delivers a powerful, intelligent coding environment fully under your control. Whether you’re interested in maximizing privacy, boosting performance, or just exploring the future of local AI development, this configuration is a game changer.

With just a bit of setup, you'll have a productive development assistant running entirely on your local device, with no API keys, subscriptions, or rate limits required.

Happy coding with AI!