The Good (and Limitations) of Using a Local Copilot with Ollama

AI coding assistants have been around for a while now, and tools like GitHub Copilot have woven their way into most development pipelines, for good reason. They’re easy to use, exceptionally helpful (at certain tasks), and have undeniably made life as a developer smoother. Recently, I decided to move away from GitHub Copilot in favour of a local model, for a few key reasons. While I don’t use it all the time, it has proven to be a useful option in many situations. In this blog post, I’ll go over why I made the switch, how I set it up, and share a bit about my experience so far.

Why?

There are plenty of cloud-based solutions available, and if you’re like me and do most of your work in VSCode, GitHub Copilot is probably your go-to. It’s easy to use, provides access to some of the best models, and, if you’re a student, it’s free, making it an obvious choice. That said, there are several reasons why you might consider a local copilot instead.

For me, the biggest factor is privacy. When working on unpublished code, keeping everything local ensures that sensitive information never leaves my environment. Is this level of caution necessary? Maybe not, but I’d rather err on the safe side. Performance is another key reason. Small models have improved significantly, and capable models have become much more lightweight. Not long ago, running a decent copilot required substantial computing power; now, my MacBook Air is more than capable of handling a local model. Is it the best option available? No, but for what I need from a copilot, it’s more than enough.

Control is another advantage. A local setup allows me to customise and configure my tools to fit my specific workflow without relying on external servers. Finally, there’s the benefit of offline access. With a local copilot, I’m not dependent on an internet connection, which means I can keep working efficiently anywhere, even when traveling.

How?

In general, I like to keep things simple, so my tools of choice are Ollama (to run the model locally) and Continue (the VSCode extension that connects to it). Both are easy to install and well documented, and wiring them together takes only a few minutes.
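To give a concrete idea of what that wiring looks like, here is a rough sketch of the kind of configuration I mean. The model tags used here (llama3.1:8b for chat, qwen2.5-coder:1.5b for autocomplete) are only examples of small models that run comfortably on a laptop, and the exact file location and field names can vary between Continue versions, so treat this as a starting point rather than a recipe. After installing Ollama, you pull a model with something like `ollama pull llama3.1:8b`, then point Continue at it via its config file (typically `~/.continue/config.json`):

```json
{
  "models": [
    {
      "title": "Llama 3.1 8B (local chat)",
      "provider": "ollama",
      "model": "llama3.1:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen 2.5 Coder 1.5B (local autocomplete)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

With something like this in place, Continue talks to the Ollama server running on your machine (it listens on localhost:11434 by default), so prompts and code never leave your environment.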

My Experience…

My experience using a local AI-powered coding assistant has been largely positive, but with some caveats. For certain tasks, such as simple function creation, error handling, documentation, best practices, efficiency optimisation, and learning, these tools are fantastic. These aren’t groundbreaking challenges, nor do they require deep problem-solving, but they are essential parts of development that need to be done well. In these cases, Continue and any reasonable LLM make my life noticeably easier: they speed up repetitive tasks, provide quick suggestions, and are generally easy to verify. However, for more complex tasks, larger project-level challenges, or niche library integration, I’ve found that they require much more careful prompting to be useful. In many cases, the time spent crafting a prompt that yields a valuable response outweighs the benefits, making traditional methods more efficient. Overall, these tools are a great addition to my workflow, a helpful assistant rather than a replacement. While they aren’t quite reliable enough for serious, high-stakes projects, they are still a valuable option to have, making everyday coding smoother and more enjoyable.
