| Gabe | |
| --- | --- |
| Email | gabe@oct6.org |
| Location | Minneapolis, Minnesota |
| Education | University of Minnesota, Bachelor of Science in Data Science |
| Links | GitHub, HuggingFace |
This article is about Gabriel Larson, the data scientist. For other uses, see Gabriel (disambiguation).
Gabriel "Gabe" Larson is a machine learning engineer and data scientist based in Minneapolis, Minnesota. A graduate of the University of Minnesota's College of Science and Engineering, Larson's work encompasses both traditional data analytics and modern AI applications, with particular expertise in statistical modeling, deep learning frameworks, and database systems.
His technical contributions range from developing reinforcement learning agents and computer vision models to building infrastructure for semantic search and retrieval systems. An active participant in the open-source AI community, Larson focuses on making advanced machine learning techniques more accessible through tool development and model optimization. His current projects include work on retrieval-augmented generation (RAG) pipelines and browser-based development environments for AI-assisted programming.
Published optimized GGUF quantizations for state-of-the-art language models, achieving thousands of community downloads. Notable releases include Mistral Small 3.2, Nanonets OCR-S, DeepSeek R1 0528 Qwen 8B Distill, and Kimi Dev 72B. These quantizations enable efficient deployment of large language models on consumer hardware.
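A minimal sketch of running one of these quantized models locally with llama-cpp-python, illustrating the kind of consumer-hardware deployment the releases target. The model filename, context size, and generation settings here are placeholders, not the exact files or parameters from the published quantizations.

```python
# Sketch: load a GGUF quantization and generate text with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-small-3.2-Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm(
    "Summarize the benefits of GGUF quantization in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```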
Developing a browser-based IDE designed specifically for "vibe coding" with LLMs. The platform aims to eliminate barriers to AI-assisted development by providing a fully web-based solution that requires no local setup. It builds on experience with Qwen2.5-Coder and Devstral models used through the Roo and Cline extensions.
Implemented a privacy-preserving federated learning system for traffic flow prediction using PyTorch and the Flower framework, integrating multiple deep learning architectures (GRU, RNN, LSTM, STGODE) to analyze the PEMS traffic monitoring dataset. Achieved significant performance improvements with the STGODE model (98% reduction in prediction error compared to baseline approaches) while maintaining data privacy through distributed training across multiple nodes.
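A minimal sketch of how a federated node can be expressed as a Flower NumPyClient wrapping a PyTorch forecaster. The TrafficGRU model, data loaders, and hyperparameters below are placeholders for illustration, not the project's actual GRU/RNN/LSTM/STGODE implementations or its PEMS preprocessing.

```python
# Sketch: one Flower client training a PyTorch traffic-flow model on local data.
from collections import OrderedDict

import flwr as fl
import torch
import torch.nn as nn


class TrafficGRU(nn.Module):
    """Toy GRU regressor standing in for the project's forecasting models."""

    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.gru(x)
        return self.head(out[:, -1]).squeeze(-1)  # next flow value per sample


class TrafficClient(fl.client.NumPyClient):
    """One federated node that trains only on its local slice of sensor data."""

    def __init__(self, model, train_loader, val_loader):
        self.model, self.train_loader, self.val_loader = model, train_loader, val_loader

    def get_parameters(self, config):
        return [v.cpu().numpy() for v in self.model.state_dict().values()]

    def set_parameters(self, parameters):
        keys = self.model.state_dict().keys()
        self.model.load_state_dict(
            OrderedDict({k: torch.tensor(v) for k, v in zip(keys, parameters)})
        )

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        opt = torch.optim.Adam(self.model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        self.model.train()
        for x, y in self.train_loader:         # one local epoch per round
            opt.zero_grad()
            loss_fn(self.model(x), y).backward()
            opt.step()
        return self.get_parameters(config), len(self.train_loader.dataset), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        self.model.eval()
        loss_fn, total, n = nn.MSELoss(reduction="sum"), 0.0, 0
        with torch.no_grad():
            for x, y in self.val_loader:
                total += loss_fn(self.model(x), y).item()
                n += len(x)
        return total / n, n, {}
```

Raw sensor readings never leave the node; only the model weights returned by `fit` are aggregated by the Flower server.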
Engineered a retrieval-augmented generation pipeline for semantic search across thousands of embedded news articles. The system constructs accurate timelines for complex queries using vector embeddings and similarity search. Currently expanding to academic paper ingestion using a custom Nanonets-OCR-S GGUF quantization.
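A minimal sketch of the retrieval step in such a pipeline: embed date-stamped articles, index them, and pull the most relevant passages for a query before handing them to the generator. The embedding model name and example snippets are placeholders, not the project's actual corpus or stack.

```python
# Sketch: embed articles, index with FAISS, and retrieve the closest passages.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

articles = [
    "2024-01-03: City council approves new transit funding.",
    "2024-02-11: Transit project breaks ground downtown.",
    "2024-05-20: First light-rail extension opens to riders.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
embeddings = embedder.encode(articles, normalize_embeddings=True)

# Inner-product search over normalized vectors is cosine-similarity search.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = embedder.encode(
    ["How did the transit project progress over time?"], normalize_embeddings=True
)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)

# The retrieved, date-stamped passages would then be ordered into a timeline
# and supplied to the generator model as context.
for rank, (i, s) in enumerate(zip(ids[0], scores[0]), start=1):
    print(f"{rank}. ({s:.2f}) {articles[i]}")
```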
Implemented and compared multiple reinforcement learning agents in a poker environment using Python and Keras; developed a custom environment wrapper and reward functions to enable AI agents to learn betting strategies.
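A minimal sketch of the wrapper-and-reward-shaping idea, written against the Gymnasium API as an assumption. `PokerHoldem-v0` is a hypothetical environment id and the big-blind normalization is illustrative; the project's agents were trained with Keras on its own environment.

```python
# Sketch: reshape raw chip winnings into a bounded reward signal for training.
import gymnasium as gym


class BettingRewardWrapper(gym.Wrapper):
    """Converts chip deltas to big blinds and clips extreme pots."""

    def __init__(self, env, big_blind: float = 2.0):
        super().__init__(env)
        self.big_blind = big_blind

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Normalize by the big blind and clip so one huge pot does not
        # dominate the learning signal across hands.
        shaped = max(min(reward / self.big_blind, 10.0), -10.0)
        return obs, shaped, terminated, truncated, info


# Usage, assuming a registered poker environment:
# env = BettingRewardWrapper(gym.make("PokerHoldem-v0"))
```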
Developed a LoRA (Low-Rank Adaptation) model for Stable Diffusion XL that captures the distinctive artistic style of 19th-century illustrator Gustave Doré. Created using a curated dataset of 4000 high-resolution images and BLIP-2 for automated labeling.
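A minimal sketch of applying a style LoRA of this kind on top of SDXL with the diffusers library. The LoRA path, prompt, and sampler settings are placeholders and do not point to the actual published Doré adapter or its trigger words.

```python
# Sketch: load SDXL, attach a low-rank style adapter, and generate an image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the trained low-rank adapter on top of the frozen base weights.
pipe.load_lora_weights("./dore-style-lora")  # hypothetical local path

image = pipe(
    prompt="an engraved forest scene in the style of Gustave Dore, dramatic light",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("dore_forest.png")
```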
Developed a CLI utility that integrates PowerShell with local AI to provide instant analysis of command outputs, enabling rapid debugging and learning through natural language queries.
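A minimal sketch of the same idea in Python: pipe a command's output into a small script that asks a locally hosted model to explain it. This assumes an Ollama server on its default port and uses a placeholder model tag; the actual utility is a PowerShell integration rather than this exact script.

```python
# Sketch: read piped command output from stdin and query a local model about it.
import sys

import requests

captured = sys.stdin.read()
question = sys.argv[1] if len(sys.argv) > 1 else "Explain this output and flag any errors."

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",   # placeholder local model tag
        "prompt": f"{question}\n\nCommand output:\n{captured}",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(response.json()["response"])
```

Invocation would look like piping any command into the script, for example `Get-ChildItem | python explain.py "Which items look unusual?"`.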
Gabe's resume can be downloaded as a PDF here.
Learning the theoretical underpinnings of machine learning and the practical know-how to apply these methods to a range of problems and applications in machine learning and artificial intelligence.
Development of probability and basic issues in statistics, including probability spaces, random variables and their distributions, expected values, the law of large numbers, the central limit theorem, and generating functions.
Exploring the fundamentals of computer vision, including registration (optical flow, image alignment, tracking), recognition (bag of features, template matching, object proposals), reorganization (graph cuts, superpixels, semantic segmentation), and reconstruction (camera geometry, epipolar geometry, stereo).