The Pipeline API is one of the core features of Transformers; typical usage is as follows:
1. Text generation example:
from transformers import pipeline
generator = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
result = generator("The secret to baking a really good cake is")
print(result[0]["generated_text"])
2. Speech recognition example:
asr = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
result = asr("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
print(result["text"])
Key Points:
- Models are automatically downloaded and cached to ~/.cache/huggingface/hub
- The cache path can be modified via the TRANSFORMERS_CACHE environment variable
- Supports local audio files or URLs as input
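The cache-path point above can be sketched as follows. Note that the `/tmp/hf_cache` path is only an illustrative assumption; the variable must be set before `transformers` is imported for it to take effect:

```python
import os

# Redirect the model cache away from the default ~/.cache/huggingface/hub.
# Set this BEFORE importing transformers; /tmp/hf_cache is a hypothetical path.
os.environ["TRANSFORMERS_CACHE"] = "/tmp/hf_cache"

# Alternatively, when loading a model directly (not via pipeline), the cache
# location can be overridden per call with the cache_dir argument, e.g.:
#   AutoModel.from_pretrained("Qwen/Qwen2.5-1.5B", cache_dir="/tmp/hf_cache")
```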
This answer comes from the article "Transformers: open source machine learning modeling framework with support for text, image and multimodal tasks".