InternLM: A Full-Chain Open-Source Ecosystem for Large Models

InternLM (書生) comprises a series of open-source, high-quality large models, together with a full-stack toolchain for developers and users.
GitHub: https://github.com/InternLM/
The full-chain open-source ecosystem consists of three major parts: models, the toolchain, and applications.


Models

  • InternLM: a series of foundation and chat large models.
  • InternLM-Math: powerful, specialized math models.
  • InternLM-XComposer: vision-language models built on InternLM that handle composite text-image data.

InternLM

GitHub: https://github.com/InternLM/InternLM
The InternLM base models are available in the following versions:

InternLM3

| Model | Transformers | ModelScope | Modelers | Release Date |
| --- | --- | --- | --- | --- |
| InternLM3-8B-Instruct | internlm3_8B_instruct | internlm3_8b_instruct | Open in Modelers | 2025-01-15 |

InternLM2.5

| Model | Transformers (HF) | ModelScope | Release Date |
| --- | --- | --- | --- |
| InternLM2.5-1.8B | internlm2_5-1_8b | internlm2_5-1_8b | 2024-08-05 |
| InternLM2.5-1.8B-Chat | internlm2_5-1_8b-chat | internlm2_5-1_8b-chat | 2024-08-05 |
| InternLM2.5-7B | internlm2_5-7b | internlm2_5-7b | 2024-07-03 |
| InternLM2.5-7B-Chat | internlm2_5-7b-chat | internlm2_5-7b-chat | 2024-07-03 |
| InternLM2.5-7B-Chat-1M | internlm2_5-7b-chat-1m | internlm2_5-7b-chat-1m | 2024-07-03 |
| InternLM2.5-20B | internlm2_5-20b | internlm2_5-20b | 2024-08-05 |
| InternLM2.5-20B-Chat | internlm2_5-20b-chat | internlm2_5-20b-chat | 2024-08-05 |
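The chat variants above follow a ChatML-style conversation template with `<|im_start|>`/`<|im_end|>` markers. As a rough illustration (the authoritative template ships with each model's tokenizer, so treat the string below as a sketch, not a spec), a prompt can be assembled like this:

```python
# Sketch: building a ChatML-style prompt for an InternLM chat model.
# The exact template is defined by the model's tokenizer config; the
# markers below are an illustration of the format, not the official spec.

def build_prompt(messages):
    """Render a list of {role, content} dicts into a single prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce InternLM in one sentence."},
])
print(prompt)
```

In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from Transformers, which reads the authoritative template from the model repo instead of hand-building strings.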

InternLM-Math

InternLM2-Math-Plus

GitHub: https://github.com/InternLM/InternLM-Math

| Model | Model Type | Transformers (HF) | ModelScope | Release Date |
| --- | --- | --- | --- | --- |
| InternLM2-Math-Plus-1.8B | Chat | internlm/internlm2-math-plus-1_8b | Shanghai_AI_Laboratory/internlm2-math-plus-1_8b | 2024-05-27 |
| InternLM2-Math-Plus-7B | Chat | internlm/internlm2-math-plus-7b | Shanghai_AI_Laboratory/internlm2-math-plus-7b | 2024-05-27 |
| InternLM2-Math-Plus-20B | Chat | internlm/internlm2-math-plus-20b | Shanghai_AI_Laboratory/internlm2-math-plus-20b | 2024-05-27 |
| InternLM2-Math-Plus-Mixtral8x22B | Chat | internlm/internlm2-math-plus-mixtral8x22b | Shanghai_AI_Laboratory/internlm2-math-plus-mixtral8x22b | 2024-05-27 |

InternLM2-Math-Base

| Model | Model Type | Transformers (HF) | ModelScope | Release Date |
| --- | --- | --- | --- | --- |
| InternLM2-Math-Base-7B | Base | internlm/internlm2-math-base-7b | internlm2-math-base-7b | 2024-01-23 |
| InternLM2-Math-Base-20B | Base | internlm/internlm2-math-base-20b | internlm2-math-base-20b | 2024-01-23 |
| InternLM2-Math-7B | Chat | internlm/internlm2-math-7b | internlm2-math-7b | 2024-01-23 |
| InternLM2-Math-20B | Chat | internlm/internlm2-math-20b | internlm2-math-20b | 2024-01-23 |

InternLM-XComposer

Chinese name: 書生·浦語-靈筆.
GitHub: https://github.com/InternLM/InternLM-XComposer

| Model | Usage | Transformers (HF) | ModelScope | Release Date |
| --- | --- | --- | --- | --- |
| InternLM-XComposer-2.5 | Video Understanding, Multi-image Multi-turn Dialog, 4K Resolution Understanding, Web Craft, Article Creation, Benchmark | internlm-xcomposer2.5 | internlm-xcomposer2.5 | 2024-07-03 |
| InternLM-XComposer2-4KHD | 4K Resolution Understanding, Benchmark, VL-Chat | internlm-xcomposer2-4khd-7b | internlm-xcomposer2-4khd-7b | 2024-04-09 |
| InternLM-XComposer2-VL-1.8B | Benchmark, VL-Chat | internlm-xcomposer2-vl-1_8b | internlm-xcomposer2-vl-1_8b | 2024-04-09 |
| InternLM-XComposer2 | Text-Image Composition | internlm-xcomposer2-7b | internlm-xcomposer2-7b | 2024-01-26 |
| InternLM-XComposer2-VL | Benchmark, VL-Chat | internlm-xcomposer2-vl-7b | internlm-xcomposer2-vl-7b | 2024-01-26 |
| InternLM-XComposer2-4bit | Text-Image Composition | internlm-xcomposer2-7b-4bit | internlm-xcomposer2-7b-4bit | 2024-02-06 |
| InternLM-XComposer2-VL-4bit | Benchmark, VL-Chat | internlm-xcomposer2-vl-7b-4bit | internlm-xcomposer2-vl-7b-4bit | 2024-02-06 |
| InternLM-XComposer | Text-Image Composition, VL-Chat | internlm-xcomposer-7b | internlm-xcomposer-7b | 2023-09-26 |
| InternLM-XComposer-4bit | Text-Image Composition, VL-Chat | internlm-xcomposer-7b-4bit | internlm-xcomposer-7b-4bit | 2023-09-26 |
| InternLM-XComposer-VL | Benchmark | internlm-xcomposer-vl-7b | internlm-xcomposer-vl-7b | 2023-09-26 |

Toolchain

  • InternEvo: a lightweight framework for large-model pre-training and fine-tuning.
  • XTuner: an efficient fine-tuning toolkit supporting many models and tuning methods.
  • LMDeploy: a toolkit for compressing, deploying, and serving large models.
  • Lagent: a lightweight framework for building LLM agents efficiently.
  • AgentLego: a collection of libraries and tools for extending and enhancing agents.
  • OpenCompass: a platform for large-model evaluation.
  • OpenAOE: a tool for comparing large models side by side.

InternEvo

GitHub: https://github.com/InternLM/InternEvo/
A lightweight framework supporting pre-training and fine-tuning of many large models.

XTuner

GitHub: https://github.com/InternLM/xtuner
An efficient fine-tuning toolkit supporting many models and multiple tuning methods, including full-parameter, LoRA, and QLoRA fine-tuning.
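A quick back-of-envelope sketch of why the low-rank tuning methods XTuner supports (such as LoRA) are cheap: instead of updating a full weight matrix, they train two small low-rank factors. The numbers below are illustrative, not taken from XTuner:

```python
# For a d_out x d_in linear layer, LoRA trains factors A (r x d_in) and
# B (d_out x r) instead of the full weight, shrinking the trainable
# parameter count dramatically for small rank r.

def lora_trainable_params(d_out, d_in, r):
    """Trainable parameters: full fine-tune vs. one LoRA-adapted layer."""
    full = d_out * d_in          # parameters a full fine-tune would update
    lora = r * (d_out + d_in)    # parameters LoRA actually trains
    return full, lora

full, lora = lora_trainable_params(4096, 4096, 16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4f}")
```

For a typical 4096-wide layer at rank 16, LoRA touches well under 1% of the weights, which is why such methods fit on modest GPUs.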

LMDeploy

GitHub: https://github.com/InternLM/lmdeploy

Supported LLMs

Llama (7B - 65B)
Llama2 (7B - 70B)
Llama3 (8B, 70B)
Llama3.1 (8B, 70B)
Llama3.2 (1B, 3B)
InternLM (7B - 20B)
InternLM2 (7B - 20B)
InternLM3 (8B)
InternLM2.5 (7B)
Qwen (1.8B - 72B)
Qwen1.5 (0.5B - 110B)
Qwen1.5-MoE (0.5B - 72B)
Qwen2 (0.5B - 72B)
Qwen2-MoE (57B-A14B)
Qwen2.5 (0.5B - 32B)
Baichuan (7B)
Baichuan2 (7B-13B)
Code Llama (7B - 34B)
ChatGLM2 (6B)
GLM4 (9B)
CodeGeeX4 (9B)
Falcon (7B - 180B)
Yi (6B - 34B)
Mistral (7B)
DeepSeek-MoE (16B)
DeepSeek-V2 (16B, 236B)
DeepSeek-V2.5 (236B)
Mixtral (8x7B, 8x22B)
Gemma (2B - 7B)
Dbrx (132B)
StarCoder2 (3B - 15B)
Phi-3-mini (3.8B)
Phi-3.5-mini (3.8B)
Phi-3.5-MoE (16x3.8B)
MiniCPM3 (4B)
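To see why the compression LMDeploy offers matters, here is a rough weight-memory estimate for 4-bit (W4A16) quantization versus FP16. Real footprints also include the KV cache, activations, and runtime overhead, so treat these as lower bounds:

```python
# Back-of-envelope weight memory for a model at a given bit width.
# Only the weights are counted; serving needs additional memory.

def weight_gib(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1024**3

params_7b = 7_000_000_000
fp16 = weight_gib(params_7b, 16)
w4 = weight_gib(params_7b, 4)
print(f"7B weights: fp16 ~{fp16:.1f} GiB, 4-bit ~{w4:.1f} GiB")
```

The 4x reduction in weight memory is what lets a 7B model move from a data-center card to a consumer GPU.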

Lagent

GitHub: https://github.com/InternLM/lagent
An efficient, lightweight framework that greatly speeds up agent development.
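The reason-act-observe loop that Lagent streamlines can be sketched in a few lines. Everything below (the stub model, the bracketed action syntax, the toy calculator tool) is hypothetical scaffolding for illustration, not Lagent's actual API:

```python
# A minimal ReAct-style agent loop: the "model" either emits an action
# (tool call) or a final answer; observations are fed back into the prompt.

def fake_llm(prompt):
    # Stub model: calls the calculator once, then answers.
    if "Observation:" not in prompt:
        return "Action: calculator[2+3]"
    return "Final Answer: 5"

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool only

def run_agent(question, llm=fake_llm, max_steps=3):
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        out = llm(prompt)
        if out.startswith("Final Answer:"):
            return out.removeprefix("Final Answer:").strip()
        # Parse "Action: name[argument]" and execute the named tool.
        name, arg = out.removeprefix("Action: ").rstrip("]").split("[", 1)
        prompt += f"\n{out}\nObservation: {TOOLS[name](arg)}"
    return None

answer = run_agent("What is 2+3?")
print(answer)
```

A real framework replaces the stub with an actual LLM, a structured action parser, and a richer tool executor, but the control flow stays the same.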


AgentLego

GitHub: https://github.com/InternLM/agentlego
Provides a variety of tool libraries that support building powerful agents.
General ability

Speech related

Image-processing related

  • ImageDescription: Describe the input image.
  • OCR: Recognize the text from a photo.
  • VQA: Answer the question according to the image.
  • HumanBodyPose: Estimate the pose or keypoints of humans in an image.
  • HumanFaceLandmark: Estimate the landmark or keypoints of human faces in an image.
  • ImageToCanny: Extract the edge image from an image.
  • ImageToDepth: Generate the depth image of an image.
  • ImageToScribble: Generate a sketch scribble of an image.
  • ObjectDetection: Detect all objects in the image.
  • TextToBbox: Detect specific objects described by the given text in the image.
  • Segment Anything series
    • SegmentAnything: Segment all items in the image.
    • SegmentObject: Segment the certain objects in the image according to the given object name.
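Tool collections like the one above are most useful when every tool shares a uniform invocation interface, so an agent can pick one by name. A toy registry in that spirit (the `Tool` class and registry here are illustrative stand-ins, not AgentLego's real API; the tool names mirror the list above):

```python
# Toy tool registry: each tool exposes a name, a description the agent can
# read, and a uniform apply() call. Implementations are obvious stand-ins.

class Tool:
    def __init__(self, name, description, fn):
        self.name, self.description, self.fn = name, description, fn

    def apply(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

REGISTRY = {
    "ImageDescription": Tool("ImageDescription", "Describe the input image",
                             lambda img: f"a description of {img}"),
    "OCR": Tool("OCR", "Recognize text in a photo",
                lambda img: f"text found in {img}"),
}

def call_tool(name, *args):
    return REGISTRY[name].apply(*args)

print(call_tool("OCR", "receipt.jpg"))
```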

AIGC related

OpenCompass

GitHub: https://github.com/open-compass/opencompass
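At its core, a benchmark run of the kind OpenCompass automates scores model predictions against references with a metric. A minimal exact-match accuracy over toy data (OpenCompass itself ships far richer datasets, prompts, and metrics):

```python
# Exact-match accuracy: the fraction of predictions that equal the
# reference after trivial normalization (whitespace, case).

def exact_match_accuracy(predictions, references):
    assert len(predictions) == len(references)
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris", "4", "blue"]
refs = ["paris", "4", "red"]
acc = exact_match_accuracy(preds, refs)
print(acc)  # 2 of 3 correct
```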


OpenAOE

What can you get from OpenAOE?

OpenAOE can:

  1. return answers from one or more LLMs at the same time for a single prompt.
  2. provide access to commercial LLM APIs, with default support for GPT-3.5, GPT-4, Google PaLM, MiniMax, Claude, Spark, etc., and also support user-defined access to other LLM APIs (API keys need to be prepared in advance).
  3. provide access to open-source LLM APIs (we recommend using LMDeploy for one-click deployment).
  4. provide backend APIs and a web UI to meet different needs.
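The "one prompt, many models" behaviour in point 1 can be sketched with stub backends and a thread pool (OpenAOE itself calls remote LLM APIs; the backends below are hypothetical stand-ins):

```python
# Fan one prompt out to several model backends concurrently and collect
# every answer keyed by model name.
from concurrent.futures import ThreadPoolExecutor

BACKENDS = {
    "model-a": lambda p: f"model-a says: {p}",
    "model-b": lambda p: f"model-b says: {p}",
}

def fan_out(prompt, backends=BACKENDS):
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = {name: pool.submit(fn, prompt)
                   for name, fn in backends.items()}
        return {name: f.result() for name, f in futures.items()}

answers = fan_out("hello")
print(answers)
```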

應(yīng)用

HuixiangDou (茴香豆)

GitHub: https://github.com/InternLM/HuixiangDou
HuixiangDou is a professional knowledge assistant built on LLMs.

Advantages:

  1. A three-stage pipeline of preprocessing, rejection, and response
  2. No training required; runs with CPU-only, 2 GB, 10 GB, 20 GB, and 80 GB configurations
  3. Offers a complete suite of Web, Android, and pipeline source code; industrial-grade and commercially viable
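The three-stage pipeline in advantage 1 can be sketched as follows; the tiny keyword "knowledge base" here is a stand-in for HuixiangDou's real retrieval and scoring components:

```python
# Toy preprocess -> rejection -> response pipeline: normalize the query,
# stay silent for chit-chat that matches nothing, answer the rest.

KNOWLEDGE = {"deploy": "Use the provided docker image to deploy."}

def preprocess(query):
    return query.strip().lower().rstrip("?")

def should_reject(query):
    # Reject queries that match nothing in the knowledge base.
    return not any(key in query for key in KNOWLEDGE)

def respond(query):
    for key, answer in KNOWLEDGE.items():
        if key in query:
            return answer
    return None

def pipeline(raw_query):
    q = preprocess(raw_query)
    if should_reject(q):
        return None  # stay silent in the group chat
    return respond(q)

print(pipeline("How do I deploy this?"))
print(pipeline("Nice weather today!"))
```

The rejection stage is what makes a group-chat assistant usable: it answers only on-topic questions instead of replying to every message.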

Check out the scenes in which HuixiangDou is running, and join the WeChat group to try the AI assistant.


MindSearch

GitHub: https://github.com/InternLM/MindSearch
MindSearch is an open-source AI search engine framework with performance comparable to Perplexity.ai Pro. You can easily deploy it to build your own search engine, using either closed-source LLMs (such as GPT or Claude) or open-source LLMs (the InternLM2.5 series is specifically optimized to deliver excellent performance within the MindSearch framework; other open-source models have not been specifically tested). It has the following features:

  • Ask anything: MindSearch answers all kinds of questions from daily life through search
  • In-depth knowledge discovery: MindSearch browses hundreds of web pages to provide broader, deeper answers
  • Transparent solution path: MindSearch exposes its full reasoning path, search keywords, and more, improving the credibility and usability of its responses
  • Multiple user interfaces: React, Gradio, Streamlit, and local debugging are all provided; pick whichever fits your needs
  • Dynamic graph construction: MindSearch decomposes the user query into sub-question nodes in a graph and progressively expands the graph based on WebSearcher results
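The dynamic graph construction described above can be illustrated with a toy graph of sub-question nodes. The decomposition here is hand-written for illustration; MindSearch derives it with an LLM planner:

```python
# Toy search graph: nodes are questions, edges link a question to the
# sub-questions it spawned, and answers fill in as searches complete.

class SearchGraph:
    def __init__(self, root):
        self.nodes = {root: None}   # question -> answer (None = unanswered)
        self.edges = []             # (parent, child) pairs

    def add_subquestion(self, parent, child):
        self.nodes[child] = None
        self.edges.append((parent, child))

    def answer(self, node, text):
        self.nodes[node] = text

    def unanswered(self):
        return [n for n, a in self.nodes.items() if a is None]

root = "Who founded the lab behind InternLM?"
g = SearchGraph(root)
g.add_subquestion(root, "Which lab develops InternLM?")
g.answer("Which lab develops InternLM?", "Shanghai AI Laboratory")
print(g.unanswered())
```

In the real system each unanswered node is handed to a WebSearcher, and its result can in turn spawn further sub-question nodes until the root question is answerable.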
