diff --git a/README.md b/README.md
index add5c10..e3e4a98 100644
--- a/README.md
+++ b/README.md
@@ -144,7 +144,7 @@ CPU: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
* CUDA == 12.2
* [requirements.txt](./requirements.txt)
-# 📌 Quick Inference & Test
+# 📌 Quick Start Test

@@ -197,6 +197,16 @@ streamlit run fast_inference.py
```bash
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```
+
+ ```python
+ # Test whether torch can use CUDA
+ import torch
+ print(torch.cuda.is_available())
+ ```
+
+ > If CUDA is not available, please download a matching whl file from [torch_stable](https://download.pytorch.org/whl/torch_stable.html)
+ > and install it yourself. See [this reference](https://blog.csdn.net/weixin_45456738/article/details/141029610?ops_request_misc=&request_id=&biz_id=102&utm_term=%E5%AE%89%E8%A3%85torch&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-2-141029610.nonecase&spm=1018.2226.3001.4187)
+
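+ A minimal sketch of such an install (the torch version, CUDA tag, and wheel filename are assumptions; match them to your own Python and CUDA versions):
+
+ ```bash
+ # install a manually downloaded wheel (example name: Python 3.9, CUDA 12.1 build of torch 2.1.2)
+ pip install torch-2.1.2+cu121-cp39-cp39-linux_x86_64.whl
+ # or install directly from the official wheel index for CUDA 12.1
+ pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
+ ```
+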
* 2. If you need to train the model yourself
diff --git a/README_en.md b/README_en.md
index d57ffa7..6479117 100644
--- a/README_en.md
+++ b/README_en.md
@@ -156,7 +156,7 @@ Environment: python 3.9 + Torch 2.1.2 + DDP multi-GPU training
* CUDA == 12.2
* [requirements.txt](./requirements.txt)
-# 📌 Quick Inference & Test
+# 📌 Quick Start Test

@@ -212,7 +212,16 @@ The project has been deployed to ModelScope makerspace, where you can experience
```bash
pip install -r requirements.txt
```
+
+ ```python
+ # Test if torch can use CUDA
+ import torch
+ print(torch.cuda.is_available())
+ ```
+ > If it is not available, please download a matching whl file from [torch_stable](https://download.pytorch.org/whl/torch_stable.html)
+ > and install it yourself. See [this reference](https://blog.csdn.net/weixin_45456738/article/details/141029610?ops_request_misc=&request_id=&biz_id=102&utm_term=安装torch&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-2-141029610.nonecase&spm=1018.2226.3001.4187)
+
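+ A minimal sketch of such an install (the torch version, CUDA tag, and wheel filename are assumptions; match them to your own Python and CUDA versions):
+
+ ```bash
+ # install a manually downloaded wheel (example name: Python 3.9, CUDA 12.1 build of torch 2.1.2)
+ pip install torch-2.1.2+cu121-cp39-cp39-linux_x86_64.whl
+ # or install directly from the official wheel index for CUDA 12.1
+ pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
+ ```
+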
* 2. If you need to train the model yourself
* 2.1 Download the dataset from the [dataset download links](#dataset-download-links) and place it in the `./dataset` directory.