Vishal Vijayraghavan

OpenSource | Linux | Python | Containers | Automation

22 Jan 2025

Local AI Coding Assistant

I came across GitHub Copilot and Cursor, and they’re truly amazing and feature-rich. However, I was concerned about data privacy, since these tools send your code to the cloud. No worries, open source always has an alternative! This time, it’s Ollama and Continue (a VS Code extension). A local AI coding assistant might seem intimidating to set up, but with Ollama and the Continue VS Code extension, it’s surprisingly straightforward. In this blog, I will walk you through setting up a powerful, private coding companion right on your local machine.

Prerequisites

Before we begin, ensure you have:

  • A reasonably capable computer (8 GB of RAM is a comfortable minimum for a small 3B-parameter model)
  • Visual Studio Code installed
  • Basic comfort with the terminal/command line

Step 1: Installing Ollama

Ollama is a tool designed to simplify running open-source large language models (LLMs) directly on your computer. On Linux, run the following command to install it (macOS and Windows users can download an installer from https://ollama.com/download):

$ curl -fsSL https://ollama.com/install.sh | sh
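Once the script finishes, it’s worth confirming the binary is actually on your PATH before moving on. A minimal check (the version string it prints will vary with your install):

```shell
# Confirm ollama is installed and on PATH; print a hint otherwise
if command -v ollama > /dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH"
fi
```

On Linux, the install script also registers a background service, so the Ollama server starts automatically.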

Pulling Your First Code Model

$ ollama pull granite-code:3b

granite-code:3b is just one of many available models; you can browse the full catalog at https://ollama.com/search
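Before wiring anything into VS Code, you can smoke-test the model straight from the terminal. `ollama run` with a trailing prompt does a one-off generation; the guard around it is only there so the snippet degrades gracefully if Ollama isn’t installed yet:

```shell
# One-off prompt against the pulled model; skips cleanly if ollama is absent
if command -v ollama > /dev/null 2>&1; then
  ollama run granite-code:3b "Write a Python function that reverses a string."
else
  echo "ollama not installed; skipping smoke test"
fi
```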

Step 2: Installing Continue VS Code Extension

  1. Open Visual Studio Code
  2. Go to the Extensions marketplace
  3. Search for “Continue” and install the extension
  4. Open the Continue extension settings and add the following to your config.json file, located at ~/.continue/config.json:
{
  "models": [
    {
      "model": "granite-code:3b",
      "provider": "ollama",
      "title": "granite-code:3b"
    }
  ]
}
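A trailing comma inside the `models` array is an easy mistake to make here, and strict JSON parsers will reject it. If you prefer the terminal, here is a sketch that writes the config and validates it with `python3 -m json.tool` (any JSON checker works just as well):

```shell
# Write the Continue config and check it is well-formed JSON
mkdir -p ~/.continue
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "model": "granite-code:3b",
      "provider": "ollama",
      "title": "granite-code:3b"
    }
  ]
}
EOF
python3 -m json.tool ~/.continue/config.json > /dev/null && echo "config OK"
```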

How to Use

  • Ctrl+L : Start the chat prompt.
  • Ctrl+I : Edit the highlighted code.

For more features (like chat, autocomplete, edit, and actions), refer to https://docs.continue.dev/getting-started/overview

Happy Coding :)
