Using a Local LLM – Putting it All Together

As I explored previously, anyone with commodity hardware, even hardware purchased at your local Costco, can now enable sovereign AI capabilities: simple code completion inside Visual Studio Code, generating function-level code blocks from prompts describing the desired functionality, and even in-depth security analysis …

Implementation of Retrieval-Augmented Generation (RAG) and LLMs for Documentation Parsing

A while ago I had the opportunity to explore Retrieval-Augmented Generation (RAG) with both internal and public LLMs, including various GPT and Llama models. The opportunities here are extensive, both for good and for harm if not implemented with proper oversight. Here’s a high-level overview of that …
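To make the RAG idea concrete, here is a minimal sketch of the retrieval half of such a pipeline: score documentation chunks against a user query with a simple bag-of-words cosine similarity, then assemble a prompt for a local LLM. The chunk texts, the query, and the prompt template are illustrative placeholders, not details from the original work; a real implementation would use proper embeddings and a vector store.

```python
# Sketch of the retrieval step in a RAG pipeline (toy scoring, stdlib only).
# Assumption: real systems would use embedding models and a vector database
# instead of bag-of-words cosine similarity.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank documentation chunks by similarity to the query, return top k.
    q = Counter(tokenize(query))
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(tokenize(c))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Ground the LLM by injecting the retrieved chunks into the prompt.
    joined = "\n---\n".join(context)
    return f"Answer using only this documentation:\n{joined}\n\nQuestion: {query}"

# Placeholder documentation chunks for illustration.
chunks = [
    "To authenticate, send a POST to the /auth endpoint with your API key.",
    "Rate limits are 100 requests per minute per client.",
]
query = "How do I authenticate?"
prompt = build_prompt(query, retrieve(query, chunks))
# `prompt` would then be sent to the local LLM of your choice.
```

The key design point is that the model never answers from its weights alone: it answers from the retrieved context, which is what makes RAG useful for parsing private or rapidly changing documentation.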