HuggingFace

  • Bilibili video: 白嫖AI大模型-HuggingFace
  • Official site: https://huggingface.co/
  • Network
    • Most operations can be completed through the hf-mirror mirror site
    • Currently, the only steps that require a proxied ("scientific") network connection are logging in and obtaining a token (used for model downloads); the token only needs to be configured once, after which no further login is required

Getting started

Install dependencies

  • Install torch/tensorflow as each project requires
# Install
pip install huggingface_hub

# (As needed per project) install the GPU build of PyTorch plus the huggingface helper extras
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

# (As needed per project) install TensorFlow plus the huggingface helper extras
pip install tensorflow
pip install 'huggingface_hub[tensorflow]'


# Other common dependencies
pip install transformers accelerate

# Projects may need their own extra dependencies; check each project's README

Sign up / Log in

  • Signing up / logging in currently requires a proxied ("scientific") network connection
  • Many repositories require authorization to download, so configure a token first
    • Register a HuggingFace account and log in
    • To create an access token: top-right avatar > "Settings" > "Access Tokens" > "Create new token" > switch "Token Type" to "Read" > enter any name > "Create token" > copy the token, which has the form "hf_***"
  • Log in with the code below
    • After login, the token is stored at "~/.cache/huggingface/token"
    • You only need to log in (run this) once, unless the token expires
from huggingface_hub import login

token = 'hf_***'
login(token)
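Because the token file persists, a quick local check can tell whether login() still needs to run. A minimal sketch (stdlib only; `token_is_configured` is a hypothetical helper, and it assumes the default token location noted above):

```python
import os

def token_is_configured(home=None):
    """Return True if huggingface_hub already stored a token locally.

    login() writes the token to ~/.cache/huggingface/token, so if that
    file exists there is no need to log in again.
    """
    home = home or os.path.expanduser("~")
    return os.path.isfile(os.path.join(home, ".cache", "huggingface", "token"))
```

Run it before calling login() to skip the step on machines that are already configured.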

Download models

  • China mirror: https://hf-mirror.com/
    • Its homepage summarizes the available download methods
  • Large models easily run to several GB, so watch your disk space
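Since a single checkpoint can take many GiB, it is worth checking free space before starting a download. A small stdlib sketch (`free_gib` and the 20 GiB threshold are illustrative, not part of any HuggingFace API):

```python
import shutil

def free_gib(path="."):
    """Free disk space on the volume containing `path`, in GiB."""
    return shutil.disk_usage(path).free / 2**30

# Example: require ~20 GiB of headroom before a large download
if free_gib(".") < 20:
    print("Low disk space: consider --local-dir pointing at a larger volume")
```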

Using the CLI

  • See the official docs for usage: https://huggingface.co/docs/huggingface_hub/guides/cli
    • --local-dir: the local directory to download into; when unspecified, the default download path is ~/.cache/huggingface/
    • --token: the auth token; not needed if you already logged in with the method above
# Install the tool
pip install huggingface_hub
# Set the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com


# Download an entire repository
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd' --token 'hf_****'

# Download specific files
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors' --local-dir 'sd'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors' --local-dir 'sd' --token 'hf_****'

Using Python

  • Download
    • When the local_dir parameter is unspecified, the default download path is ~/.cache/huggingface/
import os
from huggingface_hub import hf_hub_download, snapshot_download

# Set the mirror endpoint
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

# Download an entire repository
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers")
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", local_dir="sd")
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", local_dir="sd", token="hf_***")

# Download specific files
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors")
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors", local_dir="sd")
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors", local_dir="sd", token="hf_***")
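When no local_dir is given, the hub cache stores each repo in a folder derived from its repo_id. A sketch of that mapping (`default_cache_dir` is a hypothetical helper; the `models--{org}--{name}` layout matches current huggingface_hub versions but is an internal detail that could change):

```python
import os

def default_cache_dir(repo_id):
    """Map a repo_id to its default hub cache folder.

    huggingface_hub stores model repos under
    ~/.cache/huggingface/hub/models--{org}--{name}.
    """
    folder = "models--" + repo_id.replace("/", "--")
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub", folder)

print(default_cache_dir("stabilityai/stable-diffusion-3-medium-diffusers"))
```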

Examples

  • Every project runs a little differently; read each project's README
    • Some load the project's "model_index.json" config to run
    • Some run through ComfyUI

Text-to-image (emilianJR/epiCRealism)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

pip install transformers accelerate diffusers
  • Download the model
    • Default download path: ~/.cache/huggingface/
# Set the mirror
export HF_ENDPOINT=https://hf-mirror.com

# Download the model
huggingface-cli download 'emilianJR/epiCRealism'
# If the default location lacks space, use --local-dir to set a download path, then use that path when loading the model
huggingface-cli download 'emilianJR/epiCRealism' --local-dir 'models/emilianJR/epiCRealism'
  • Run
from diffusers import StableDiffusionPipeline
import torch

model_id = "emilianJR/epiCRealism"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "your prompt here, in English; this model handles Chinese poorly"
image = pipe(prompt).images[0]

image.save("image.png")
  • Wrap it with Gradio
pip install gradio
from diffusers import StableDiffusionPipeline
import torch
import gradio as gr

model_id = "emilianJR/epiCRealism"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")


def generate(prompt: str):
    image = pipe(prompt).images[0]
    return image


demo = gr.Interface(fn=generate,
                    inputs=gr.Textbox(label="Prompt (use English; Chinese support is poor)"),
                    outputs=gr.Image(),
                    examples=["A girl smiling", "A boy smiling", "A dog running"])
demo.launch()

Text-to-video (ByteDance/AnimateDiff-Lightning)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

pip install transformers accelerate diffusers
  • A lightweight text-to-video model; it relies on a text-to-image base model to generate the video
    • Use emilianJR/epiCRealism (above) as the text-to-image base model
# Set the mirror
export HF_ENDPOINT=https://hf-mirror.com

# Base text-to-image model
huggingface-cli download 'emilianJR/epiCRealism'
# Download the text-to-video model
huggingface-cli download 'ByteDance/AnimateDiff-Lightning' 'animatediff_lightning_4step_diffusers.safetensors'

# You can also download the whole repository, which contains checkpoints for several step counts
huggingface-cli download 'ByteDance/AnimateDiff-Lightning'

  • Example
    • The output is an animated GIF
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler
from diffusers.utils import export_to_gif
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

device = "cuda"
dtype = torch.float16

step = 4  # Options: [1,2,4,8]
repo = "ByteDance/AnimateDiff-Lightning"
ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
base = "emilianJR/epiCRealism"  # Choose your favorite base model.

adapter = MotionAdapter().to(device, dtype)
adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))
pipe = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")

output = pipe(prompt="A girl smiling", guidance_scale=1.0, num_inference_steps=step)
export_to_gif(output.frames[0], "animation.gif")
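The checkpoint filename in the example follows a fixed pattern for the 1/2/4/8-step variants, which can be captured in a small helper (`lightning_ckpt` is an illustrative name; the pattern is taken from the f-string in the code above):

```python
def lightning_ckpt(step):
    """Checkpoint filename for a given AnimateDiff-Lightning step count.

    The repository publishes diffusers checkpoints for 1, 2, 4, and 8
    inference steps, named animatediff_lightning_{step}step_diffusers.safetensors.
    """
    if step not in (1, 2, 4, 8):
        raise ValueError("step must be one of 1, 2, 4, 8")
    return f"animatediff_lightning_{step}step_diffusers.safetensors"
```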

Text-to-image (stable-diffusion-3)

pip install huggingface_hub

# Install the CUDA (GPU) build of PyTorch; see the PyTorch site for install commands: https://pytorch.org/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# A CPU-only torch build also works, but use a GPU if at all possible; this model is unusably slow on CPU
pip install 'huggingface_hub[torch]'

# Other dependencies
pip install transformers
pip install accelerate
pip install diffusers
pip install sentencepiece
pip install protobuf
  • Download the model
    • Default download path: ~/.cache/huggingface/
    • --local-dir sets the download directory; when running the code, point the model path at that same directory
# Set the mirror environment variable
export HF_ENDPOINT=https://hf-mirror.com

# Download the entire repository
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd'
  • Run the code
    • Write prompts in English; Chinese works very poorly
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
# pipe = StableDiffusion3Pipeline.from_pretrained("sd", torch_dtype=torch.float16) # load from a local directory

pipe = pipe.to("cuda")

image = pipe("A cat holding a sign that says hello world",
             negative_prompt="",
             num_inference_steps=28,
             guidance_scale=7.0).images[0]

# Save the image
image.save('cat.jpg')

  • Error when GPU VRAM is under 16GB
torch.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 512.00 MiB. GPU 0 has a total capacity of 11.00 GiB of which 0 bytes is free. 
Of the allocated memory 16.80 GiB is allocated by PyTorch, and 574.00 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
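The log already hints at one mitigation: the allocator option it mentions can reduce fragmentation, though it cannot conjure missing VRAM. diffusers also offers `pipe.enable_model_cpu_offload()`, which trades speed for a much smaller VRAM footprint. Both are general suggestions, not guaranteed fixes for this model:

```shell
# Suggested by the error message itself: allow expandable allocator segments
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```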

Possible issues

Missing 'fbgemm.dll' error

  • Error message
OSError: [WinError 126] The specified module could not be found. Error loading "...\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
  • A commonly reported fix is installing the latest Microsoft Visual C++ Redistributable (fbgemm.dll depends on its OpenMP runtime); switching to a different torch version is another reported workaround
