from ultralytics import YOLO

# Load a model
# model = YOLO("yolo11n.pt")  # load an official model
# model = YOLO("./runs/train/exp2/weights/best.pt")  # load a custom trained model
model = YOLO("./runs/train/exp9/weights/best.pt")

# Export the model
# Note: half=True and int8=True are normally mutually exclusive; when both are set,
# int8 quantization appears to take precedence (the log below produces int8 outputs).
model.export(format="tflite", half=True, int8=True, imgsz=320, device=0, nms=True, opset=17, simplify=True, batch=1)
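As background for the full-INT8 model this export produces (input_inference_type: INT8 in the log below): an int8 TFLite model expects inputs quantized with the tensor's scale and zero point, and its raw outputs must be dequantized the same way. The sketch below shows that affine quantization step with illustrative scale/zero-point values; for a real model these come from `interpreter.get_input_details()[0]["quantization"]`, so treat the numbers here as assumptions, not values from this export.

```python
# Affine int8 quantization as used by TFLite full-integer models:
#   q = round(x / scale) + zero_point   (clamped to [-128, 127])
#   x = (q - zero_point) * scale
# scale and zero_point below are illustrative, not read from the exported model.

def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a real value to its int8 representation, clamped to the int8 range."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map an int8 value back to the real axis."""
    return (q - zero_point) * scale

# Example: a normalized pixel value of 0.5 with a typical image-input
# quantization of scale = 1/255, zero_point = -128.
scale, zero_point = 1.0 / 255.0, -128
q = quantize(0.5, scale, zero_point)      # int8 value fed to the interpreter
x = dequantize(q, scale, zero_point)      # approximate recovered float
```

Values outside the representable range are clamped, which is why a too-large scale silently saturates detections; checking the reported quantization parameters is a common first debugging step for int8 accuracy drops.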
The log printed in the terminal during export:
Ultralytics YOLOv8.2.0 🚀 Python-3.9.21 torch-2.1.0+cu118 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
Model summary (fused): 168 layers, 3006038 parameters, 0 gradients, 8.1 GFLOPs
PyTorch: starting from 'runs\train\exp9\weights\best.pt' with input shape (1, 3, 320, 320) BCHW and output shape(s) (1, 6, 2100) (5.9 MB)
TensorFlow SavedModel: starting export with tensorflow 2.13.1...
ONNX: starting export with onnx 1.14.0 opset 17...
ONNX: simplifying with onnxsim 0.4.36...
ONNX: export success ✅ 0.8s, saved as 'runs\train\exp9\weights\best.onnx' (11.6 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.17.5...
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model conversion started ============================================================
saved_model output started ==========================================================
saved_model output complete!
Float32 tflite output complete!
Float16 tflite output complete!
Input signature information for quantization
signature_name: serving_default
input_name.0: images shape: (1, 320, 320, 3) dtype: <dtype: 'float32'>
Dynamic Range Quantization tflite output complete!
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
INT8 Quantization tflite output complete!
fully_quantize: 0, inference_type: 6, input_inference_type: INT8, output_inference_type: INT8
Full INT8 Quantization tflite output complete!
INT8 Quantization with int16 activations tflite output complete!
Full INT8 Quantization with int16 activations tflite output complete!
TensorFlow SavedModel: export success ✅ 96.7s, saved as 'runs\train\exp9\weights\best_saved_model' (38.4 MB)
TensorFlow Lite: starting export with tensorflow 2.13.1...
TensorFlow Lite: export success ✅ 0.0s, saved as 'runs\train\exp9\weights\best_saved_model\best_int8.tflite' (3.0 MB)
Export complete (97.4s)
Results saved to E:\ultralytics\ultralytics-8.2.0\runs\train\exp9\weights
Predict: yolo predict task=detect model=runs\train\exp9\weights\best_saved_model\best_int8.tflite imgsz=320 int8
Validate: yolo val task=detect model=runs\train\exp9\weights\best_saved_model\best_int8.tflite imgsz=320 data=./data.yaml int8
Visualize: https://netron.app
Could anyone with experience in this area offer some advice? Many thanks.