    Yi-Coder, a new open-source code LLM series, has been released by 01.ai.

    Available in 1.5B- and 9B-parameter versions, Yi-Coder offers both base and chat models for efficient inference and flexible training.

    The 9B version, built on Yi-9B, is trained on an additional 2.4 trillion high-quality tokens drawn from GitHub and filtered CommonCrawl data. Key features include continued pretraining across 52 major programming languages, a 128K-token context window, and strong benchmark results.

    Yi-Coder-9B-Chat achieved a 23.4% pass rate on LiveCodeBench, outperforming larger models.
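    A pass rate here means the fraction of benchmark problems for which a generated solution passes the hidden tests. Code benchmarks in this family typically report such scores with the unbiased pass@k estimator introduced alongside HumanEval; the sketch below shows that estimator itself as an assumption about the methodology, not LiveCodeBench's exact evaluation harness:

    ```python
    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k: probability that at least one of k completions,
        drawn without replacement from n samples of which c are correct,
        passes the tests."""
        if n - c < k:
            return 1.0  # every size-k draw must include a correct sample
        return 1.0 - comb(n - c, k) / comb(n, k)

    # With one sample per problem, pass@1 reduces to the plain solve rate:
    print(pass_at_k(10, 5, 1))  # 0.5 when half the samples are correct
    ```

    Averaging pass@1 over all benchmark problems yields a single percentage of the kind quoted above.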

    It also excelled at code editing, code completion, and mathematical reasoning tasks, matching or surpassing models with significantly more parameters.