TimesFM 2.0 – 500M PyTorch is a pretrained time-series foundation model developed by Google Research, designed for time-series forecasting. The model handles context lengths of up to 2048 time points and supports arbitrary forecast horizons. TimesFM 2.0 performs strongly on several leading benchmarks, improving on its predecessor by 25%. The model also provides 10 experimental quantile heads, although these have not been calibrated after pretraining. Users can download the model from the Hugging Face platform and use it for time-series forecasting.
This model can be used in scenarios such as predicting retail sales, stock movements, and website traffic. TimesFM 2.0 is ranked #1 on the GIFT-Eval leaderboard and supports fine-tuning on your own data. It performs univariate time-series forecasting over context lengths of up to 2048 time points and arbitrary horizon lengths, with an optional frequency indicator.
Function List
- Time-series forecasting: supports context lengths of up to 2048 time points and arbitrary forecast horizons.
- Quantile forecasting: provides 10 experimental quantile heads (see the quantile example under Usage).
- Model fine-tuning: supports fine-tuning on your own data.
- Zero-shot covariate support: supports zero-shot forecasting with external regressors (covariates).
- High performance: outperforms its predecessor by 25% across multiple leading benchmarks.
Usage Guide
Installation
- Install dependencies:
  - Use pyenv and poetry to perform a local installation.
  - Make sure the Python version is 3.10.x (for the JAX version) or >=3.11.x (for the PyTorch version).
  - Run the following commands to install the dependencies:

```bash
pyenv install 3.11.x   # replace 3.11.x with a concrete patch release
pyenv virtualenv 3.11.x timesfm-env
pyenv activate timesfm-env
poetry install
```
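To sanity-check the environment afterwards (assuming the dependency set installs the `timesfm` Python package, as in the official repository), run a quick import test inside the virtualenv:

```bash
# Confirm the interpreter version and that timesfm resolves
python --version
python -c "import timesfm; print(timesfm.__file__)"
```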
- Download the model:
  - Visit the Hugging Face platform to download the TimesFM 2.0 – 500M PyTorch model checkpoint.
  - Use the following commands to download the model:

```bash
git clone https://huggingface.co/google/timesfm-2.0-500m-pytorch
cd timesfm-2.0-500m-pytorch
```
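Alternatively, the checkpoint can be fetched programmatically with the `huggingface_hub` client; a small sketch, assuming `huggingface_hub` is available in the environment:

```python
from huggingface_hub import snapshot_download

# Download the model repository into the local Hugging Face cache
local_path = snapshot_download(repo_id="google/timesfm-2.0-500m-pytorch")
print(local_path)  # directory containing the checkpoint files
```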
Usage
- Load the model:
  - Load the model in a Python environment via the official `timesfm` package, following the snippet on the model card:

```python
import timesfm

# Load TimesFM 2.0 (500M, PyTorch) from its Hugging Face checkpoint.
# The hyperparameters below follow the model card for this checkpoint.
tfm = timesfm.TimesFm(
    hparams=timesfm.TimesFmHparams(
        backend="gpu",               # "cpu" also works, just slower
        per_core_batch_size=32,
        horizon_len=128,             # desired forecast horizon
        num_layers=50,
        use_positional_embedding=False,
        context_len=2048,            # up to 2048 time points of context
    ),
    checkpoint=timesfm.TimesFmCheckpoint(
        huggingface_repo_id="google/timesfm-2.0-500m-pytorch"
    ),
)
```
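A note on the knobs above: `backend` selects the device ("cpu" or "gpu"), and `horizon_len` can be set to whatever forecast range you need, since TimesFM 2.0 supports arbitrary horizon lengths.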
- Run a forecast:
  - Prepare the input data and generate predictions:

```python
import numpy as np

forecast_input = [np.array([...])]  # replace with your actual time-series data
frequency_input = [0]               # optional frequency indicator per series

point_forecast, quantile_forecast = tfm.forecast(
    forecast_input, freq=frequency_input)
```
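The second return value exposes the experimental quantile heads mentioned in the Function List. A hedged sketch of reading them, assuming the layout used by the `timesfm` package (last axis stacking the mean forecast followed by the 0.1–0.9 deciles; verify against your installed version):

```python
import numpy as np

# Toy input: one synthetic series of 256 points
series = [np.sin(np.linspace(0, 20, 256))]
point, quantiles = tfm.forecast(series, freq=[0])

print(point.shape)      # (1, horizon_len): point forecast per series
print(quantiles.shape)  # (1, horizon_len, num_heads): experimental quantile heads
median = quantiles[0, :, 5]  # assumed index of the 0.5-quantile head
```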
- Fine-tune the model:
  - Fine-tuning on your own data is supported; the official TimesFM repository ships fine-tuning examples (see the notebooks in the GitHub repo). A minimal illustration of the training-loop pattern follows below.
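As a purely illustrative sketch of the fine-tuning pattern (the tiny `ToyForecaster` below is a hypothetical stand-in, not the TimesFM architecture; the real procedure should follow the repository's fine-tuning notebook):

```python
import torch
from torch import nn

# Hypothetical stand-in model used only to illustrate the loop; the real
# TimesFM fine-tuning entry points live in the official repository.
class ToyForecaster(nn.Module):
    def __init__(self, context_len=32, horizon_len=8):
        super().__init__()
        self.proj = nn.Linear(context_len, horizon_len)

    def forward(self, context):
        return self.proj(context)

model = ToyForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Toy dataset: sliding context/horizon windows over a sine wave.
series = torch.sin(torch.linspace(0, 50, 1000))
contexts = torch.stack([series[i:i + 32] for i in range(0, 960, 8)])
targets = torch.stack([series[i + 32:i + 40] for i in range(0, 960, 8)])

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(contexts), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```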
- Use external regressors (covariates):
  - Zero-shot covariate forecasting is supported via in-context regression. A hedged sketch following the covariates example in the TimesFM repository (`forecast_with_covariates`; argument names come from that example and should be checked against your installed version):

```python
# Dynamic covariates must cover both the context window and the forecast horizon.
cov_forecast, xreg_forecast = tfm.forecast_with_covariates(
    inputs=forecast_input,
    dynamic_numerical_covariates={"temperature": [...]},  # replace with real covariate data
    dynamic_categorical_covariates={},
    static_numerical_covariates={},
    static_categorical_covariates={},
    freq=frequency_input,
    xreg_mode="xreg + timesfm",
)
```
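The call returns two series per input: the covariate-assisted forecast and, per the repository's example, a baseline from the covariate regression alone, which makes it easy to check whether the covariates actually help.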