Meta has released Code Llama, a large language model specialising in coding that it claims outperforms other publicly available options on code-related tasks.
Based on Meta’s Llama 2 LLM and trained on 500 billion tokens of code-related data, Code Llama can generate code from prompts and offers code completion and debugging support for a range of popular programming languages.
Three versions are being released, at 7, 13 and 34 billion parameters; the 7B and 13B models also include “fill-in-the-middle” capabilities to support code completion.
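To illustrate what “fill-in-the-middle” means in practice: the model is given the code before and after a gap and asked to generate the missing span. The sketch below shows how such an infilling prompt can be assembled using the special-token layout from Meta’s released Code Llama reference code; the exact token format is an assumption here and may change between releases.

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model is expected to
    generate the code that belongs between `prefix` and `suffix`.
    Token layout follows Meta's Code Llama reference code (illustrative)."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in the body of a function,
# given its signature (prefix) and its return statement (suffix).
prompt = build_infill_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return result",
)
print(prompt)
```

An editor plugin would send a prompt like this to the model and splice the generated completion back between the prefix and suffix, which is how code-completion integrations typically use the 7B and 13B models.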
While Meta recommends the use of Code Llama to assist software engineers, the company also warns developers not to use it for general natural language tasks, noting it is “not designed to follow natural language instructions”.