Loading an ONNX Model

After exporting the model from your preferred framework to ONNX, you can load and validate it with the onnx package and run inference with ONNX Runtime. Here’s an example:

import numpy as np
import onnx
import onnxruntime as ort

# Load the ONNX model and verify that it is well formed
model = onnx.load("path/to/exported_model.onnx")
onnx.checker.check_model(model)

# Create an ONNX Runtime inference session
session = ort.InferenceSession("path/to/exported_model.onnx")

# Perform inference using the loaded model
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # Replace with your actual input data
output = session.run([output_name], {input_name: input_data})

print(output)

Remember to replace "path/to/exported_model.onnx" with the path to your exported ONNX model, and provide input data whose shape and dtype match what the model expects.
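
If you are unsure what the model expects, you can inspect the session’s input and output metadata before constructing the data. This is a minimal sketch reusing the session from above; the names, shapes, and types it prints depend entirely on your exported model:

# Inspect the declared inputs and outputs of the loaded model
for inp in session.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)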

That’s it! Now you can export models from different frameworks to ONNX and use ONNX Runtime to load and perform inference with those models.
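
For completeness, the export step itself looks roughly like this on the PyTorch side. This is a hedged sketch: the ResNet-18 model, the dummy input shape, and the output path are placeholder assumptions you would replace with your own model and paths.

import torch
import torchvision

# Any torch.nn.Module can be exported; an untrained ResNet-18 is used here only as a stand-in
model = torchvision.models.resnet18(weights=None)
model.eval()

# Dummy input whose shape matches what the model expects at inference time
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the input/output names are what you later see via session.get_inputs()/get_outputs()
torch.onnx.export(
    model,
    dummy_input,
    "path/to/exported_model.onnx",
    input_names=["input"],
    output_names=["output"],
)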

Author: Himanshu Upreti
Posted on: 2023-08-06
Updated on: 2023-08-20
