Become a leader in the IoT community!
Join our community of embedded and IoT practitioners to contribute experience, learn new skills and collaborate with other developers with complementary skillsets.
I’m deploying a fine-tuned Mask R-CNN model (ResNet-101) on a Raspberry Pi 4 (4GB RAM) running Raspbian OS, using TensorFlow Lite v2.6 for aerial object detection. During inference, I get the following error:
ValueError: TensorFlow Lite currently supports models with fixed-size input tensors, but the model has dynamic-sized input tensors. Expected input shape: [1, 1024, 1024, 3], but received input shape: [1, 512, 512, 3].
Despite resizing all input images to 1024×1024, the model still expects dynamic input shapes. Here’s the code for loading and running inference:
import tensorflow as tf
import numpy as np
from PIL import Image

# Load the converted model and allocate its input/output tensors.
interpreter = tf.lite.Interpreter(model_path="mask_rcnn_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize to the expected 1024x1024 and add a batch dimension as float32.
image = Image.open('aerial_image.jpg').resize((1024, 1024))
input_data = np.expand_dims(np.array(image), axis=0).astype(np.float32)

# Run inference and read back the first output tensor.
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
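One way to rule out a preprocessing mismatch is to read the shape the interpreter actually reports and pin any dynamic dimension with `resize_tensor_input` before allocating tensors. A minimal sketch, assuming the same `mask_rcnn_model.tflite` and `aerial_image.jpg` paths as above (the helper names are placeholders, not part of the original code):

```python
import numpy as np
from PIL import Image


def preprocess(image, target_hw):
    """Resize to (height, width) and add a batch dimension as float32."""
    h, w = target_hw
    resized = image.resize((w, h))  # PIL's resize takes (width, height)
    return np.expand_dims(np.asarray(resized, dtype=np.float32), axis=0)


def run_inference(model_path="mask_rcnn_model.tflite",
                  image_path="aerial_image.jpg"):
    # Imported here so preprocess() stays usable without TensorFlow installed.
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path=model_path)
    inp = interpreter.get_input_details()[0]

    # shape_signature marks dynamic dimensions as -1; pin them explicitly
    # before allocating, otherwise invoke() rejects the input.
    if any(d < 0 for d in inp.get("shape_signature", inp["shape"])):
        interpreter.resize_tensor_input(inp["index"], [1, 1024, 1024, 3])
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]  # refresh details after resize
    _, h, w, _ = inp["shape"]
    input_data = preprocess(Image.open(image_path), (h, w))

    interpreter.set_tensor(inp["index"], input_data)
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])

# On the Pi, with the model file present:
# results = run_inference()
```

Driving the resize from `inp["shape"]` rather than a hard-coded size also surfaces the real expected shape when the error message and the preprocessing disagree.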
I’m also facing significant inference delays (~10 seconds per image). I attempted post-training quantization using float16 and int8, but the performance remains suboptimal.
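For reference, a float16 post-training quantization pass with `TFLiteConverter` typically looks like the sketch below; the SavedModel directory and output filename are placeholders, not the actual export paths:

```python
import tensorflow as tf


def quantize_float16(saved_model_dir, out_path="mask_rcnn_fp16.tflite"):
    """Convert a SavedModel to a float16-quantized .tflite file."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]  # weights -> fp16
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return out_path
```

Full int8 quantization additionally needs a representative dataset (`converter.representative_dataset`) so activations can be calibrated, which is usually the tricky part for Mask R-CNN conversions.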
What strategies can I use to fix the dynamic tensor error, optimize inference speed on Raspberry Pi, and improve detection accuracy for small objects in aerial imagery?
Hi, the issue was fixed after I switched to YOLOv4; it's a lighter model than Mask R-CNN and better suited to the Raspberry Pi 4.