{"id":32251,"date":"2025-07-09T08:49:40","date_gmt":"2025-07-09T00:49:40","guid":{"rendered":"https:\/\/www.kdjingpai.com\/?p=32251"},"modified":"2025-07-09T08:49:40","modified_gmt":"2025-07-09T00:49:40","slug":"finetuningllms","status":"publish","type":"post","link":"https:\/\/www.kdjingpai.com\/de\/finetuningllms\/","title":{"rendered":"FineTuningLLMs: A Practical Guide to Efficient Single-GPU Fine-Tuning of Large Language Models"},"content":{"rendered":"<p>FineTuningLLMs is a GitHub repository created by dvgodoy, based on his book A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face. It gives developers a practical, systematic guide focused on efficiently fine-tuning large language models (LLMs) on a single consumer-grade GPU. Built around the Hugging Face ecosystem, it explains how to optimize model performance with tools such as PyTorch, LoRA adapters, and quantization. The material covers the complete workflow from model loading to deployment and is aimed at machine-learning practitioners and researchers. Code examples and detailed documentation walk users through fine-tuning, deployment, and troubleshooting. The project is shared as open source and encourages community contributions and learning.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-32252\" title=\"FineTuningLLMs: A Practical Guide to Efficient Single-GPU Fine-Tuning of Large Language Models-1\" src=\"https:\/\/www.kdjingpai.com\/wp-content\/uploads\/2025\/07\/901a230e9d54eb0.jpg\" alt=\"FineTuningLLMs: A Practical Guide to Efficient Single-GPU Fine-Tuning of Large Language Models-1\" width=\"773\" height=\"1000\" srcset=\"https:\/\/www.kdjingpai.com\/wp-content\/uploads\/2025\/07\/901a230e9d54eb0.jpg 773w, https:\/\/www.kdjingpai.com\/wp-content\/uploads\/2025\/07\/901a230e9d54eb0-9x12.jpg 9w\" sizes=\"auto, (max-width: 773px) 100vw, 773px\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2>Feature List<\/h2>\n<ul>\n<li>Provides a complete LLM fine-tuning workflow covering data preprocessing, model loading, and parameter optimization.<\/li>\n<li>Supports LoRA and quantization to lower the hardware requirements of single-GPU fine-tuning.<\/li>\n<li>Integrates with the Hugging Face ecosystem, with configuration examples for pretrained models and tooling.<\/li>\n<li>Includes a performance comparison of Flash Attention and PyTorch SDPA to speed up model training.<\/li>\n<li>Supports converting fine-tuned models to the GGUF format for easy local deployment.<\/li>\n<li>Provides deployment guides for <a href=\"https:\/\/www.kdjingpai.com\/de\/ollama\/\">Ollama<\/a> and <a href=\"https:\/\/www.kdjingpai.com\/de\/llamacpp\/\">llama.cpp<\/a> to simplify putting models into service.<\/li>\n<li>Includes a troubleshooting guide listing common errors and how to resolve them.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2>Usage Guide<\/h2>\n<h3>Installation<\/h3>\n<p>FineTuningLLMs is a code repository hosted on GitHub 
and users first need to set up the required development environment. The detailed installation and configuration steps are as follows:<\/p>\n<ol>\n<li><strong>Clone the repository<\/strong><br \/>\nOpen a terminal and run the following commands to clone the repository locally:<\/p>\n<pre><code>git clone https:\/\/github.com\/dvgodoy\/FineTuningLLMs.git\r\ncd FineTuningLLMs\r\n<\/code><\/pre>\n<\/li>\n<li><strong>Set up the Python environment<\/strong><br \/>\nMake sure Python 3.8 or later is installed. A virtual environment is recommended to isolate dependencies:<\/p>\n<pre><code>python -m venv venv\r\nsource venv\/bin\/activate  # Linux\/Mac\r\nvenv\\Scripts\\activate     # Windows\r\n<\/code><\/pre>\n<\/li>\n<li><strong>Install dependencies<\/strong><br \/>\nThe repository provides a\u00a0<code>requirements.txt<\/code>\u00a0file listing the required Python libraries (PyTorch, Hugging Face transformers, and so on). Install them with:<\/p>\n<pre><code>pip install -r requirements.txt\r\n<\/code><\/pre>\n<\/li>\n<li><strong>Install optional tools<\/strong>\n<ul>\n<li>If you plan to deploy models, install Ollama or llama.cpp. According to the official documentation, Ollama can be installed with:\n<pre><code>curl https:\/\/ollama.ai\/install.sh | sh\r\n<\/code><\/pre>\n<\/li>\n<li>If you work with GGUF-format models, install llama.cpp and follow its GitHub page to complete the setup.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Verify the environment<\/strong><br 
\/>\nRun the repository's example script\u00a0<code>test_environment.py<\/code>\u00a0(if present) to confirm that the dependencies are installed correctly:<\/p>\n<pre><code>python test_environment.py\r\n<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Main Features<\/h3>\n<h4>1. Data Preprocessing<\/h4>\n<p>FineTuningLLMs provides data-formatting utilities that help you prepare a training dataset suitable for fine-tuning. Prepare the dataset in JSON or CSV format, containing the input text and the target output. The repository's\u00a0<code>data_preprocessing.py<\/code>\u00a0script (an example file) can be used to clean and format the data. Run:<\/p>\n<pre><code>python data_preprocessing.py --input input.json --output formatted_data.json\r\n<\/code><\/pre>\n<p>Make sure the input data follows the Hugging Face datasets conventions; the fields are typically\u00a0<code>text<\/code>\u00a0and\u00a0<code>label<\/code>.<\/p>\n<h4>2. 
Model Fine-Tuning<\/h4>\n<p>The repository's core feature is fine-tuning a model on a single GPU using LoRA and quantization. You can choose a pretrained model from Hugging Face (such as LLaMA or Mistral). LoRA parameters (such as the rank and the alpha value) are configured in the\u00a0<code>config\/lora_config.yaml<\/code>\u00a0file. Example configuration:<\/p>\n<pre><code>lora:\r\n  rank: 8\r\n  alpha: 16\r\n  dropout: 0.1\r\n<\/code><\/pre>\n<p>Run the fine-tuning script:<\/p>\n<pre><code>python train.py --model_name llama-2-7b --dataset formatted_data.json --output_dir .\/finetuned_model\r\n<\/code><\/pre>\n<p>The script loads the model, applies the LoRA adapters, and starts training. Quantization options (such as 8-bit integers) can be enabled through a command-line argument:<\/p>\n<pre><code>python train.py --quantization 8bit\r\n<\/code><\/pre>\n<h4>3. Performance Optimization<\/h4>\n<p>The repository supports two attention implementations, Flash Attention and PyTorch SDPA. Select one in\u00a0<code>train.py<\/code>\u00a0with\u00a0<code>--attention flash<\/code>\u00a0or\u00a0<code>--attention sdpa<\/code>. Flash Attention is usually faster but has stricter hardware-compatibility requirements. Run the following command to compare their performance:<\/p>\n<pre><code>python benchmark_attention.py --model_name llama-2-7b\r\n<\/code><\/pre>\n<p>The script reports training speed and memory usage, making it easy to pick the configuration that fits your hardware.<\/p>\n<h4>4. 
Model Deployment<\/h4>\n<p>A fine-tuned model can be converted to the GGUF format for local inference. Run the conversion script:<\/p>\n<pre><code>python convert_to_gguf.py --model_path .\/finetuned_model --output_path model.gguf\r\n<\/code><\/pre>\n<p>To deploy the model with Ollama, first create a\u00a0<code>Modelfile<\/code>\u00a0containing the line\u00a0<code>FROM .\/model.gguf<\/code>, then register and start the model (the name\u00a0<code>my-model<\/code>\u00a0here is an example):<\/p>\n<pre><code>ollama create my-model -f Modelfile\r\nollama run my-model\r\n<\/code><\/pre>\n<p>You can then interact with the model through the HTTP API or the command line:<\/p>\n<pre><code>curl http:\/\/localhost:11434\/api\/generate -d '{\"model\": \"my-model\", \"prompt\": \"Hello, world!\"}'\r\n<\/code><\/pre>\n<h4>5. Troubleshooting<\/h4>\n<p>The repository includes a\u00a0<code>troubleshooting.md<\/code>\u00a0file that lists common problems, such as out-of-memory errors or model-loading failures, which you can consult to resolve errors. For example, if you run into insufficient CUDA memory, try reducing the batch size:<\/p>\n<pre><code>python train.py --batch_size 4\r\n<\/code><\/pre>\n<h3>Highlighted Features<\/h3>\n<h4>LoRA Fine-Tuning<\/h4>\n<p>LoRA (Low-Rank Adaptation) is the repository's core technique: it updates only a small subset of the model's parameters, which significantly reduces compute requirements. Set the rank and the scaling factor (alpha) in\u00a0<code>config\/lora_config.yaml<\/code>. During fine-tuning, the LoRA adapters are applied automatically to the model's attention layers. You can verify the effect of LoRA with the following command:<\/p>\n<pre><code>python 
evaluate.py --model_path .\/finetuned_model --test_data test.json\r\n<\/code><\/pre>\n<h4>Quantization Support<\/h4>\n<p>Quantization converts the model weights from 16-bit floating point to 8-bit integers, reducing memory usage. Enable it during training or inference:<\/p>\n<pre><code>python train.py --quantization 8bit\r\n<\/code><\/pre>\n<p>A quantized model runs efficiently even on a consumer-grade GPU such as the NVIDIA RTX 3060.<\/p>\n<h4>Local Deployment<\/h4>\n<p>With Ollama or llama.cpp, you can deploy the model on a local machine. Ollama runs a simple local server, which is convenient for quick tests. Start it with:<\/p>\n<pre><code>ollama serve\r\n<\/code><\/pre>\n<p>You can then reach the model at\u00a0<code>http:\/\/localhost:11434<\/code>\u00a0through its HTTP API.<\/p>\n<p>&nbsp;<\/p>\n<h2>Use Cases<\/h2>\n<ol>\n<li><strong>Personalized chatbots<\/strong><br \/>\nFine-tune the model to generate domain-specific conversations, such as customer service or technical support. Prepare a domain-relevant dialogue dataset and run the fine-tuning script; the model will then produce answers better suited to that scenario.<\/li>\n<li><strong>Tailored text generation<\/strong><br 
\/>\nWriters and content creators can use the fine-tuned model to generate text in a specific style, such as technical documentation or creative writing. By adjusting the training data, the model can be made to imitate the target style.<\/li>\n<li><strong>Local model deployment<\/strong><br \/>\nCompanies and developers can deploy fine-tuned models on local servers for offline inference. GGUF support and Ollama make it feasible to run models in low-resource environments.<\/li>\n<li><strong>Education and research<\/strong><br \/>\nStudents and researchers can use the repository to learn LLM fine-tuning techniques. The code examples and documentation are beginner-friendly and help explain how quantization, LoRA, and attention mechanisms are implemented.<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<h2>QA<\/h2>\n<ol>\n<li><strong>Is FineTuningLLMs suitable for beginners?<\/strong><br \/>\nYes. The repository provides well-commented code and detailed documentation, suitable for beginners with a basic grounding in Python and machine learning. You should know basic PyTorch and Hugging Face usage.<\/li>\n<li><strong>Do I need a high-end GPU?<\/strong><br \/>\nNo. The repository focuses on single-GPU fine-tuning; a consumer-grade GPU (such as an RTX 3060 with 12 GB of VRAM) is enough, and LoRA 
and quantization lower the hardware requirements even further.<\/li>\n<li><strong>How do I choose a suitable pretrained model?<\/strong><br \/>\nChoose a model based on the task at hand. LLaMA or <a href=\"https:\/\/www.kdjingpai.com\/de\/le-chat-mistral\/\">Mistral<\/a> from Hugging Face suit most NLP tasks. The repository's documentation recommends starting your tests with a smaller model (around 7B parameters).<\/li>\n<li><strong>Does deploying the model require extra tools?<\/strong><br \/>\nYes. Ollama or llama.cpp is recommended for deployment. Both are open source and easy to install; see the repository's\u00a0<code>deploy_guide.md<\/code>\u00a0for the exact steps.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>FineTuningLLMs is a GitHub repository created by dvgodoy, based on his book A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging 
Face. The repository&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[365],"class_list":["post-32251","post","type-post","status-publish","format-standard","hentry","category-kecheng","tag-damoxingweidiao"],"_links":{"self":[{"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/posts\/32251","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/comments?post=32251"}],"version-history":[{"count":0,"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/posts\/32251\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/media?parent=32251"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/categories?post=32251"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kdjingpai.com\/de\/wp-json\/wp\/v2\/tags?post=32251"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}