IQuest-Coder-V1 Technical Report
Abstract
In this report, we introduce the IQuest-Coder-V1 series (7B/14B/40B/40B-Loop), a new family of code large language models (LLMs). Moving beyond static code representations, we propose the code-flow multi-stage training paradigm, which captures the dynamic evolution of software logic through different phases of the pipeline. Our models are developed through an evolutionary pipeline, starting with initial pre-training on code facts, repository, and completion data. Following that, we implement a specialized mid-training stage that integrates reasoning and agentic trajectories at 32k context and repository-scale data at 128k context to forge deep logical foundations. The models are then finalized with post-training of specialized coding capabilities, bifurcated into two specialized paths: the thinking path (utilizing reasoning-driven RL) and the instruct path (optimized for general assistance). IQuest-Coder-V1 achieves state-of-the-art performance among competitive models across critical dimensions of code intelligence: agentic software engineering, competitive programming, and complex tool use. To address deployment constraints, the IQuest-Coder-V1-Loop variant introduces a recurrent mechanism designed to balance model capacity against deployment footprint, offering an architecturally enhanced efficacy-efficiency trade-off. We believe the release of the IQuest-Coder-V1 series, including the complete white-box chain of checkpoints from pre-training bases to the final thinking and instruct models, will advance research in autonomous code intelligence and real-world agentic systems.