Progress on Blawx-MCP

Progress on exposing Blawx encodings to your coding agents

Jan. 25, 2026

I ran into an unexpected and interesting challenge while working on the MCP server for Blawx. Evidently, agents (like Claude Code, ChatGPT, Copilot, etc.) will truncate what they receive from tools in ways that are difficult to predict, because the truncation depends on how much space the context window has available at that moment, which in turn depends on things like how long the conversation has been running and which LLM model is being used.

That means MCP servers need to be built so that agents can consume arbitrarily small portions of their outputs. Making that work for Blawx meant implementing optional caching of reasoner responses over the API, plus additions to the API for retrieving portions (and slices of portions) of cached results.
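
To make the pattern concrete, here is a minimal sketch of what that could look like from a client's point of view: run the reasoner once with caching on, get back a handle, then pull down only as much of the result as fits. This is illustrative only; the base URL, endpoint paths, the result_id field, and the offset/length slice parameters are all hypothetical, not the actual Blawx API.

```python
# Hypothetical client-side sketch of cache-then-slice retrieval.
import requests

BASE = "https://example.com/blawx/api"  # hypothetical base URL

# 1. Run the reasoner with caching enabled; instead of the full answer,
#    the server returns a handle to the cached result.
run = requests.post(f"{BASE}/run", json={"cache": True, "query": "..."})
run.raise_for_status()
result_id = run.json()["result_id"]  # hypothetical response field

# 2. Fetch one portion (e.g. a single answer) of the cached result.
answer = requests.get(f"{BASE}/results/{result_id}/answers/0")

# 3. Fetch a slice of that portion if even one answer is too large
#    for the agent's remaining context window.
chunk = requests.get(
    f"{BASE}/results/{result_id}/answers/0",
    params={"offset": 0, "length": 2000},  # hypothetical slice params
)
print(chunk.text)
```

The advantage of a handle-plus-slices design is that the agent decides how much to pull into its context at each step, rather than the server guessing how much room is left.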

I have re-implemented the development Blawx-MCP server to take advantage of these changes to the API, and that seems to have resolved the issues with answer truncation.
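
For a sense of the server side, here is a rough sketch of how an MCP tool could expose that slicing to an agent, using the MCP Python SDK's FastMCP helper. The tool name, the endpoint it calls, and its parameters are again hypothetical, a sketch of the approach rather than the actual Blawx-MCP implementation.

```python
# Hypothetical MCP tool exposing slices of a cached reasoner result.
import requests
from mcp.server.fastmcp import FastMCP

BASE = "https://example.com/blawx/api"  # hypothetical, as above

mcp = FastMCP("blawx")

@mcp.tool()
def get_answer_slice(result_id: str, answer: int,
                     offset: int = 0, length: int = 2000) -> str:
    """Return a bounded slice of one answer from a cached reasoner run,
    so an agent can page through output in pieces as small as it needs."""
    resp = requests.get(
        f"{BASE}/results/{result_id}/answers/{answer}",
        params={"offset": offset, "length": length},
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run()
```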

It's not clear to me yet exactly how good the current coding agents are at using the Blawx-MCP tools. I'll share what I learn on that front here as I learn it. But rather than wait, I'm releasing it now in case any other Pro subscribers want to try it out and discover its strengths and weaknesses before I do.

Blawx v2.0.3 with the API changes and a beta version of the Blawx-MCP server should both be available in the next day or so.