What large language models does Ollama support and how to choose the right model?

February 19, 19:51

Ollama supports a wide range of open-source large language models. The main families include (a short CLI sketch for downloading and managing them follows the list):

1. Meta Llama Series:

  • llama2 - Llama 2 7B/13B/70B
  • llama3 - Llama 3 8B/70B
  • llama3.1 - Llama 3.1 8B/70B/405B
  • llama3.2 - Llama 3.2 1B/3B

2. Mistral AI Series:

  • mistral - Mistral 7B
  • mixtral - Mixtral 8x7B
  • mixtral:8x22b - Mixtral 8x22B

3. Google Gemma Series:

  • gemma - Gemma 2B/7B
  • gemma2 - Gemma 2 9B/27B

4. Code-Specific Models:

  • codellama - Code Llama 7B/13B/34B
  • deepseek-coder - DeepSeek Coder

5. Other Popular Models:

  • qwen - Qwen series
  • phi - Microsoft Phi series
  • gemma2:9b - Lightweight model
  • tinyllama - TinyLlama 1.1B
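
All of these models are installed and managed the same way. A minimal sketch, assuming Ollama is already installed and using model names from the list above purely as examples:

```bash
# Pull a model from the Ollama library (downloads the weights locally)
ollama pull llama3.1:8b

# List the models already downloaded on this machine
ollama list

# Remove a model you no longer need, to free up disk space
ollama rm gemma2:9b
```

Pulling a model caches it locally; `ollama run` will also pull a model automatically the first time it is used.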

Model Selection Recommendations:

  1. General Conversation: llama3.1:8b or mistral:7b
  2. Code Generation: codellama:13b or deepseek-coder
  3. Lightweight Applications: phi3:3.8b or gemma2:9b
  4. High-Quality Output: llama3.1:70b (requires significantly more memory; see the sizing sketch below)
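
Before settling on one of these recommendations, it helps to check a model's parameter count, quantization, and context length against your available RAM/VRAM. A minimal sketch using the standard CLI (the model tag is just an example):

```bash
# Download the candidate model
ollama pull llama3.1:8b

# Print its details: architecture, parameter count, context length, quantization
ollama show llama3.1:8b
```

As a rough guide, Ollama's own documentation suggests about 8 GB of RAM for 7B-8B models, 16 GB for 13B models, and 32 GB for 33B-class models; 70B-class models need considerably more.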

Model Version Specification:

```bash
ollama run llama3.1:8b
ollama run mistral:7b-instruct
ollama run codellama:13b-python
```
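
The same `model:tag` name is used when calling Ollama's local HTTP API. A minimal sketch, assuming the server is running on its default port 11434:

```bash
# Single non-streaming completion against a specific model version
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Explain what a context window is in one sentence.",
  "stream": false
}'
```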

View Available Models: Visit https://ollama.com/library to see all available models and their variants.

Tags: Ollama