Where can I find the "specific use case documentation for Add custom inputs"?

Hi,
I am trying the Arm ML embedded evaluation kit.
I am using the inference runner (inference runner code example), and I have successfully added my custom model and run inference with it on the FVP.
For the next step, I want to use my own data and have the output image displayed on the MPS3 LCD (I am trying a super-resolution model).
I want to follow this document, but I can't find where it is.


Can somebody tell me where I can find this document?
Thanks a lot!

Hi there,

Great to hear that you have successfully been able to use the embedded evaluation kit so far. To answer your particular question: the inference runner is set up to use random input data, so this section does not exist for the inference runner. Other use cases do have this section (e.g. image classification: Image Classification Code Sample). You can also build the inference runner with dynamic loading capability to feed in your own custom data and save the output from inference (see: Inference Runner Code Sample).

However, from the sound of what you want to do, the easiest way to accomplish it is to modify one of the existing use cases, or to make your own use case based on an existing one. The image classification use case is likely the best one to start modifying.

A basic outline of what you would need to change would be:

  1. Switch to your custom model like you have done with the inference runner (see: Image Classification Code Sample)
  2. Switch to your custom image data (see: Image Classification Code Sample)
  3. Make TFLite Micro aware of any layers in your custom model by modifying source/application/api/use_case/img_class/src/MobileNetModel.cc - ml/ethos-u/ml-embedded-evaluation-kit - Gitiles (you might need to change this line: source/application/api/use_case/img_class/include/MobileNetModel.hpp - ml/ethos-u/ml-embedded-evaluation-kit - Gitiles if you add more than the current number of operators) - see the sketch after this list.
  4. Modify any pre/post processing if your model needs something different from what image classification uses (see: source/application/api/use_case/img_class/src/ImgClassProcessing.cc - ml/ethos-u/ml-embedded-evaluation-kit - Gitiles).
  5. Modify the ClassifyImageHandler function in the UseCaseHandler (source/use_case/img_class/src/UseCaseHandler.cc - ml/ethos-u/ml-embedded-evaluation-kit - Gitiles) to display your output.
     You can just reuse the function that displays the input image (see: source/use_case/img_class/src/UseCaseHandler.cc - ml/ethos-u/ml-embedded-evaluation-kit - Gitiles) and use the output tensor if no post-processing needs to happen.
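
For step 3, here is a rough sketch of what the operator registration in MobileNetModel.cc typically looks like. Treat it as an illustration only: which Add...() calls you need depends entirely on the layers in your own model, and the exact helper names (e.g. printf_err) may differ between versions of the kit, so check against the source you have checked out.

```cpp
/* Sketch of EnlistOperations() in MobileNetModel.cc (or your own Model subclass).
 * The operators below are examples only - register whichever operators your
 * custom model actually contains. */
bool arm::app::MobileNetModel::EnlistOperations()
{
    /* One Add...() call per TFLite operator type present in the model. */
    this->m_opResolver.AddConv2D();
    this->m_opResolver.AddDepthwiseConv2D();
    this->m_opResolver.AddAveragePool2D();
    this->m_opResolver.AddAdd();
    this->m_opResolver.AddReshape();
    this->m_opResolver.AddSoftmax();

#if defined(ARM_NPU)
    /* Required so that operators compiled for the Ethos-U NPU can be dispatched. */
    if (kTfLiteOk != this->m_opResolver.AddEthosU()) {
        printf_err("Failed to add Arm NPU support to op resolver.\n");
        return false;
    }
#endif /* ARM_NPU */

    return true;
}
```

If you end up registering more operators than the resolver currently allows, that is when the operator-count constant in MobileNetModel.hpp mentioned in step 3 needs increasing to match.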

This section of the documentation goes into detail on how to add a new use case and might also be useful as a reference: Implementing custom ML application

If you have any questions about the steps above or need any more guidance, just let me know.

Hope this helps,
Richard


Hi, thanks for your guidance!
I can successfully display my output on the LCD now.
However, there is a difference between what is displayed on the FVP LCD and on the MPS3 FPGA LCD.
I followed this step (deployment on the MPS3 board).
The output on the FVP (in the red circle) is correct, but when I displayed it on the MPS3 (in the red circle), half of the output was wrong. It seems that the output changes from uint8 to int8.

Is there some option I need to change when I deploy on the MPS3?

Great to see the instructions helped you get things displaying on the LCD!

That is an odd issue you are having there, with it working correctly on the FVP but not on the MPS3; if it works on the FVP, it should work exactly the same on the MPS3. You shouldn't need to change anything when you deploy to the MPS3.

Perhaps something is corrupting the memory between inference completion and displaying. Are you directly displaying the tflite output tensor, or do you copy it somewhere first before displaying? We will try to see if we can replicate the issue here locally to assist further.
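
For reference, a defensive copy might look roughly like the sketch below. This is only an illustration: PrepareOutputForDisplay is a hypothetical helper name, and it assumes the output tensor holds interleaved 8-bit RGB pixel data. If the output tensor is int8 rather than uint8, an offset (or a full de-quantisation using the tensor's scale and zero point) is needed before passing it to the LCD, which expects uint8 pixel values.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

#include "tensorflow/lite/c/common.h"

/* Hypothetical helper: copy the model output into a separate buffer and,
 * if the tensor is int8, shift it back into the uint8 pixel range. */
static std::vector<uint8_t> PrepareOutputForDisplay(const TfLiteTensor* outputTensor)
{
    std::vector<uint8_t> displayBuf(outputTensor->bytes);

    if (outputTensor->type == kTfLiteInt8) {
        const int8_t* src = outputTensor->data.int8;
        for (size_t i = 0; i < outputTensor->bytes; ++i) {
            /* int8 [-128, 127] -> uint8 [0, 255]; use scale/zero-point
             * de-quantisation instead if a plain offset is not enough. */
            displayBuf[i] = static_cast<uint8_t>(src[i] + 128);
        }
    } else {
        const uint8_t* src = outputTensor->data.uint8;
        std::copy(src, src + outputTensor->bytes, displayBuf.begin());
    }
    return displayBuf;
}
```

You could then pass displayBuf.data() to the same LCD display call the use case already uses for the input image. If the FVP/FPGA difference persists even with a copy like this, memory corruption between inference and display is the next thing to rule out.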

Hi, thanks for your fast reply

I tried two ways of displaying the output:

  1. Directly displaying the tflite output tensor.
  2. Displaying a copy of the output.

Both ways have the same problem.

By the way, I modified the “ImgClassPreProcess” function in ImgClassProcessing.cc.
I call this function twice: once to pre-process the input tensor and once to post-process the output tensor.
I guess this may be the reason, so I am trying to modify it.

Hi,

We verified that the FVP and FPGA show the same behaviour for INT8 formatted images.


[Screenshot: int8_cat_on_fvp_20220706A]

The issue might be somewhere else. The FVP can sometimes be more forgiving of data corruption, as it has more underlying memory, whereas the FPGA has stricter limitations.
