Using a Local LLM | Function-Level Code Generation and Accuracy

As part of investigating the use of a local LLM on commodity hardware (specifically an RTX 5070 Ti), I wanted to step away from the more standardized testing (e.g., LiveCodeBench) and do a highly unscientific gut check on what to expect. I mean, in the real world, what do those scoring numbers actually translate to as …

Implementation of Retrieval-Augmented Generation (RAG) and LLMs for Documentation Parsing

A while ago I had the opportunity to explore Retrieval-Augmented Generation (RAG) using both internal and public LLMs, including various GPT and Llama models. The opportunities here are extensive, both for good and for harm if not implemented with proper oversight. Here’s a high-level overview of that …