
TDA4VL-Q1: DLR Model Infers Slowly

Part Number: TDA4VL-Q1

After converting the ONNX model to a DLR model with "examples/osrt_python/tvm_dlr/tvm_compilation_onnx_example.py", inference on the TDA4VL from my C++ code is very slow, about 10000 ms per frame, and inference through the Python DLR API takes about the same time.

The key point is that running the same ONNX model directly with Python onnxruntime is fast, about 12 ms per frame.

ONNX model download url: http://software-dl.ti.com/jacinto7/esd/modelzoo/09_02_00/models/vision/detection/coco/edgeai-yolox/yolox-s-ti-lite_39p1_57p9.onnx

PS: I deleted the NMS part after downloading it!
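
For reference, cutting the NMS tail off an ONNX graph can be done with the standard onnx.utils.extract_model helper. The sketch below only illustrates that step; the input and output tensor names are placeholders and must be replaced with the real names from the graph (e.g. looked up in Netron).

import onnx

# Cut the graph just before the NMS block.
# "images" and "pre_nms_output" are assumed placeholder tensor names,
# not the actual names in this model.
onnx.utils.extract_model(
    "yolox-s-ti-lite_39p1_57p9.onnx",        # original model with NMS
    "yolox-s-ti-lite_39p1_57p9_nonms.onnx",  # truncated model without NMS
    input_names=["images"],
    output_names=["pre_nms_output"],
)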

Python DLR code:

import dlr
import numpy as np

# Load model.
# /path/to/model is a directory containing the compiled model artifacts (.so, .params, .json)
model = dlr.DLRModel('/opt/yolox-s-ti-lite_39p1_57p9_nonms.onnx', 'cpu', 0)

# Prepare some input data.
x = np.random.rand(1, 3, 640, 640)

# Run inference.
y = model.run(x)
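
As a minimal sketch of how a per-frame number like the 10000 ms above can be measured around model.run (this is not the exact measurement code; the float32 dtype and the warm-up/averaging loop are assumptions):

import time

import dlr
import numpy as np

# Same artifact path as above; assumed to contain the compiled .so/.params/.json.
model = dlr.DLRModel('/opt/yolox-s-ti-lite_39p1_57p9_nonms.onnx', 'cpu', 0)

# YOLOX-s input shape from the post; float32 is assumed here.
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Warm-up run so one-time initialisation is not counted.
model.run(x)

# Average the latency over a few frames.
n = 10
start = time.perf_counter()
for _ in range(n):
    model.run(x)
print("Average DLR time: %.1f ms per frame" % ((time.perf_counter() - start) / n * 1000))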

The onnxruntime code is "examples/osrt_python/ort/onnxrt_ep.py", and the command used to run it is "python3 onnxrt_ep.py -m yolox-s-ti-lite_39p1_57p9.onnx".
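
Note that onnxrt_ep.py normally runs the model through TI's TIDL execution provider with pre-compiled artifacts; the sketch below is only a simplified plain-CPU onnxruntime measurement of the same kind of per-frame number, with the model file and input shape taken from the post.

import time

import numpy as np
import onnxruntime as ort

# Plain CPU session (no TIDL offload) on the original model from the post.
sess = ort.InferenceSession("yolox-s-ti-lite_39p1_57p9.onnx",
                            providers=["CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Warm-up run, then average over a few frames.
sess.run(None, {input_name: x})
n = 10
start = time.perf_counter()
for _ in range(n):
    sess.run(None, {input_name: x})
print("Average onnxruntime time: %.1f ms per frame" % ((time.perf_counter() - start) / n * 1000))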

Taylor:

Hi,

For your query, please post it on the E2E Forum at the link below. TI's product line experts will answer your question.

https://e2e.ti.com/
