Convert Base / Large models to CoreML compatible

#7
by sean-o-sullivan - opened

Hi,

I’m building an on-device 3D scanner from depth maps, and I’d like to run the larger variants of the model (e.g. Base) on my iPhone XS.

I have gotten DepthAnythingV2Base running on my phone using the conversion Jupyter notebook and a slightly modified version of the Core ML demo app. But the output depth maps appear less sensitive than those from the SmallF32/16 models. I think I might be making a mistake in the conversion. How should I be converting it?

I understand that it will run significantly slower, but I’m OK with that; I prefer quality over speed. Ideally I want to run the Large model on device.

I understand this is a detailed question, and I would appreciate any opinions or advice, even just a one- or two-word response.

Thank you,

Seán

sean-o-sullivan changed discussion title from Convert Base / Large to Convert Base / Large models to CoreML compatible

nvm I got it working. I will attach the files soon

I just had to change the scale values
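For anyone hitting the same issue: the "scale values" fix is consistent with how Core ML image inputs work. coremltools' `ct.ImageType` applies `out = scale * pixel + bias` to each channel, so the ImageNet-style normalization typically used by Depth Anything V2's encoder has to be folded into those two parameters. Below is a minimal sketch of deriving them (the helper name is mine, and it assumes the standard ImageNet mean/std; `ImageType` takes a single scalar `scale`, so the per-channel std is approximated by its average, which is the usual workaround):

```python
# Hypothetical helper: fold ImageNet normalization (x/255 - mean) / std
# into Core ML's ImageType preprocessing, which computes scale*pixel + bias.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def imagetype_scale_bias(mean=IMAGENET_MEAN, std=IMAGENET_STD):
    avg_std = sum(std) / len(std)        # ImageType's scale is one scalar,
    scale = 1.0 / (255.0 * avg_std)      # so approximate std by its average
    bias = [-m / s for m, s in zip(mean, std)]  # per-channel mean subtraction
    return scale, bias

scale, bias = imagetype_scale_bias()
print(scale, bias)  # scale ≈ 0.01735, bias ≈ [-2.118, -2.036, -1.804]
```

These values would then go into the conversion call, e.g. `ct.ImageType(shape=..., scale=scale, bias=bias)` passed to `ct.convert`. If the scale/bias don't match the normalization the model was trained with, the depth output still looks plausible but is compressed, which matches the "less sensitive" maps described above.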

Hello @sean-o-sullivan - I'm keen to see how to convert the other versions. Are you able to share your model and/or conversion steps?
