The complete process for deploying the 7B model consists of three key steps:
- Environment Configuration: Prepare a Python 3.8+ environment with PyTorch and the transformers library (`pip install torch transformers`). A GPU with FP16 support, such as an NVIDIA A100/A800, is recommended, with at least 15 GB of video memory.
- Model Acquisition:
  - Clone the GitHub repository: `git clone https://github.com/ECNU-ICALK/EduChat.git`
  - Download the model file from Hugging Face: `huggingface-cli download ecnu-icalk/educhat-sft-002-7b`
- Model Loading: Initialize the dialogue system using the following Python code:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained('ecnu-icalk/educhat-sft-002-7b')
model = LlamaForCausalLM.from_pretrained(
    'ecnu-icalk/educhat-sft-002-7b',
    torch_dtype=torch.float16,  # load weights directly in FP16
).cuda()
```
After deployment is complete, you can switch between dialogue modes by modifying the theme switch (Psychology/Socrates/General) in the system_prompt. Note that you need to adjust the batch size according to available video memory to avoid out-of-memory errors.
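The mode switch described above can be sketched as follows. This is a minimal illustration only: the prompt template, role tags, and mode wordings here are assumptions, not the actual format defined in the EduChat repository, so adapt them to the templates shipped with the project.

```python
# Hypothetical sketch of switching EduChat dialogue modes via system_prompt.
# The mode texts and the <|system|>/<|user|>/<|assistant|> tags are
# illustrative assumptions, not the repository's actual template.
MODE_PROMPTS = {
    "General": "You are EduChat, a helpful education assistant.",
    "Psychology": "You are EduChat in psychological-counseling mode; respond with empathy.",
    "Socrates": "You are EduChat in Socratic mode; guide the student with questions rather than direct answers.",
}

def build_prompt(mode: str, user_message: str) -> str:
    """Prepend the mode-specific system prompt to the user's message."""
    system_prompt = MODE_PROMPTS[mode]
    return f"<|system|>{system_prompt}\n<|user|>{user_message}\n<|assistant|>"

prompt = build_prompt("Socrates", "Why does ice float on water?")
```

The resulting string would then be tokenized and passed to `model.generate` as usual; only the system-prompt prefix changes between modes.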
This answer comes from the article *EduChat: Open Source Education Dialogue Model*.