Using a Local LLM – Putting it All Together

As I have previously explored, there is now great potential for anyone with commodity hardware, even hardware that can be purchased at your local Costco, to enable sovereign AI capabilities. From simple code generation inside Visual Studio Code, to generating function-level code blocks from prompts describing the desired functionality, to doing in-depth security analysis …

Using a Local LLM | Function-Level Code Generation and Accuracy

As part of investigating the use of a local LLM on commodity hardware (specifically an RTX 5070 Ti), I wanted to step away from the more standardized testing (e.g., LiveCodeBench) and do a highly unscientific gut check on what to expect. I mean, in the real world, what do those scoring numbers actually translate to as …

Using a Local LLM | Visual Studio Integration – Code Completion with WSL

With AI-based code completion being the latest rage, I wanted to check out the basics of the local capabilities that are available as an alternative to Cursor's hosted services. My primary prerequisite is that it must have a native plugin for Visual Studio Code. This search brought me to Llama.VSCode, which (among other …
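
Since Llama.VSCode talks to a locally running llama.cpp server, a quick sanity check against that server on its own can help before wiring anything into Visual Studio Code. The sketch below is a minimal, hedged example assuming llama-server is already running on localhost:8080 (its stock default) with its OpenAI-compatible chat endpoint; the port, model settings, and prompt are illustrative placeholders rather than anything taken from the original post.

```python
# Minimal sketch: query a locally running llama-server (llama.cpp) via its
# OpenAI-compatible chat endpoint. Assumes the server is already up on
# localhost:8080 with a code-capable model loaded; adjust the URL to match
# your own setup (these values are assumptions, not from the post).
import requests

LLAMA_SERVER_URL = "http://127.0.0.1:8080/v1/chat/completions"  # assumed default port

def complete_code(prompt: str, max_tokens: int = 256) -> str:
    """Send a single code-generation prompt to the local server and return the reply text."""
    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # keep generations fairly deterministic for code
    }
    response = requests.post(LLAMA_SERVER_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(complete_code("Write a Python function that reverses a linked list."))
```

If the server answers here, the editor-side plugin should only need to be pointed at the same endpoint.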