Bug Fix·Pushed May 1, 2026
LLM summary validation relaxed for large outputs
Commit summaries generated by AI for large pull requests were being rejected in CI because they exceeded character limits enforced by the code—limits that weren't actually sent to the model in the first place. The fix stops the double-validation that's been blocking merges.
When developers merged feature branches containing substantial work, the AI-generated commit summaries were mysteriously failing CI validation. The root cause: the code was stripping length constraints from the JSON schema sent to the MiniMax model, then re-validating the response against the original bounded schema. Summaries exceeding [[code]]story.max(2000)[[/code]] or [[code]]technicalDescription.max(3000)[[/code]] characters were rejected—even though the model never knew those limits existed.
The fix removes the strict post-hoc validation. LangChain's built-in structured-output validation is trusted instead, matching the pattern used elsewhere in the codebase. The schema bounds remain as soft guidance via field descriptions, but they no longer cause rejections after the fact. Large commits that previously failed now go through, which matters most when the AI-generated summaries are most valuable—the substantive ones covering actual feature work.
Technical description
In [[code ref=1]]action/src/llm.ts[[/code]], the [[code]]createSummarizer[[/code]] function previously called [[code]]ChangesNodeOutputSchema.parse(response.parsed)[[/code]] to re-validate LLM outputs against a strict Zod schema with min/max character bounds. This worked when the same schema was sent to the model, but broke for MiniMax because [[code]]stripJsonSchemaConstraints[[/code]] removes those bounds before the schema reaches the model.
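The mismatch can be illustrated with a small sketch. The [[code]]stripJsonSchemaConstraints[[/code]] below is a hypothetical stand-in for the real helper, operating on plain JSON-schema objects rather than Zod types:

```typescript
// Hypothetical stand-in for stripJsonSchemaConstraints: drops length
// bounds before the schema is sent to the model.
type JsonSchema = Record<string, unknown> & {
  properties?: Record<string, JsonSchema>;
};

function stripJsonSchemaConstraints(schema: JsonSchema): JsonSchema {
  // Destructure minLength/maxLength out so `rest` omits them.
  const { minLength, maxLength, properties, ...rest } = schema as JsonSchema & {
    minLength?: number;
    maxLength?: number;
  };
  const stripped: JsonSchema = { ...rest };
  if (properties) {
    stripped.properties = Object.fromEntries(
      Object.entries(properties).map(([k, v]) => [k, stripJsonSchemaConstraints(v)])
    );
  }
  return stripped;
}

// Original bounded schema (mirrors story.max(2000) from the Zod schema).
const bounded: JsonSchema = {
  type: "object",
  properties: {
    story: { type: "string", maxLength: 2000 },
  },
};

// What the model actually sees: no maxLength, so it is free to run long.
const sentToModel = stripJsonSchemaConstraints(bounded);
console.log(JSON.stringify(sentToModel));
// The bug: re-validating the (possibly long) response against `bounded`
// rejects outputs the model was never told to keep short.
```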
The fix replaces the strict Zod parse with a direct cast: [[code]]return response.parsed as ChangesNodeOutput[[/code]]. LangChain's structured output mechanism handles validation at the model level—the schema with soft bounds still provides guidance through field descriptions, but post-hoc enforcement is dropped.
````diff
file=action/src/llm.ts
- return ChangesNodeOutputSchema.parse(response.parsed) as ChangesNodeOutput;
+ return response.parsed as ChangesNodeOutput;
````
This aligns with gitsky's [[code]]invokeStructured[[/code]] pattern, which trusts the model's output directly. The trade-off is intentional: slightly longer summaries may be accepted, but the system no longer rejects valid outputs that happened to exceed arbitrary limits not communicated to the model.
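The "soft guidance" approach can be sketched as follows. This is a hypothetical schema fragment, not the actual [[code]]ChangesNodeOutputSchema[[/code]]: the character limit lives only in the field's description text, so the model sees the guidance but nothing enforces it after the fact:

```typescript
// Hypothetical schema fragment: the limit is expressed as prose in the
// description rather than as a hard maxLength constraint.
const storyField = {
  type: "string",
  description:
    "Narrative summary of the change. Aim for at most ~2000 characters.",
};

interface ChangesNodeOutput {
  story: string;
}

// Post-hoc handling mirrors the fix: trust the structured output and
// cast, instead of re-parsing against a bounded schema.
function acceptParsed(parsed: unknown): ChangesNodeOutput {
  return parsed as ChangesNodeOutput; // no length check after the fact
}

const longStory = "x".repeat(2500); // would have failed story.max(2000) before
const output = acceptParsed({ story: longStory });
console.log(output.story.length); // accepted despite exceeding the soft bound
```

The design choice this illustrates: validation happens once, at the model boundary, and anything the provider returns as well-formed structured output is accepted downstream.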
Files at a Glance:
- [[code]]action/src/llm.ts[[/code]]: Removed strict Zod re-validation, now trusts LangChain's structured output
Categories
- Bug Fix (85%) — Fixes Zod validation errors causing CI failures on large commits when using MiniMax LLM path
- Maintenance (15%) — Simplifies validation logic by removing redundant post-hoc parsing