I have an idea for a phone app I am considering developing, but need some input on whether it is possible or not. The app is relatively simple - using the camera, it will take a photo of a flat object, such as a book, and from the photo, calculate the object's width and height.
I've done some research and understand trigonometry may be useful here. If the distance from the camera to the object can be calculated, then some basic image processing should find the borders of the object, and from there its width and height across its centre lines.
My issue comes with scaling - it should be able to detect, for argument's sake, that a book is 25cm x 15cm whether the book is held 30cm away from the camera or 100cm away from the camera.
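The scaling issue largely reduces to the pinhole camera model: an object's apparent size in pixels is inversely proportional to its distance, so if you know the distance and the camera's focal length in pixels, the real size falls out directly. A minimal sketch of that relationship, with all numbers hypothetical example values:

```python
def real_size_cm(size_px: float, distance_cm: float, focal_length_px: float) -> float:
    """Pinhole camera model: real size = pixel extent * distance / focal length (in px)."""
    return size_px * distance_cm / focal_length_px

# Hypothetical values: a book spanning 1000 px when held 30 cm away,
# with a camera whose focal length is 1200 px:
w_near = real_size_cm(1000, 30, 1200)

# The same book held 100 cm away spans proportionally fewer pixels (300 px),
# so the computed real size stays the same:
w_far = real_size_cm(300, 100, 1200)

print(w_near, w_far)  # both 25.0 cm
```

This is why distance is the crux of the problem: the pixel measurement and the focal length are easy to get, but without the distance the equation has one unknown too many.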
I would say it is possible: we had a builder around, and he did not measure anything up, just took photos with his phone. When I spoke to him about it, he said the phone would give him the sizes.
So before you spend a lot of time on the project, I would check what is already available first.
It sounds theoretically possible (but not easy), except for one part: calculating the distance from the camera to the object.
All I can think of for that would be getting data about the focus, which I don't think the cameras record.
Not sure whether it might be possible by knowing the FOV and then using parallax differences between multiple images - maybe not.
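The parallax idea above amounts to stereo triangulation: if two photos are taken from positions a known baseline apart, the pixel shift (disparity) of a feature between them gives its distance. A minimal sketch, with all numbers hypothetical:

```python
def depth_from_parallax(focal_length_px: float, baseline_cm: float, disparity_px: float) -> float:
    """Stereo triangulation: depth = focal length (px) * baseline / disparity (px)."""
    return focal_length_px * baseline_cm / disparity_px

# Hypothetical values: two shots taken 10 cm apart, a corner of the book
# shifting 400 px between them, focal length 1200 px:
depth = depth_from_parallax(1200, 10, 400)
print(depth)  # 30.0 cm
```

The practical catch is the baseline: with a single handheld phone you do not know how far the camera moved between shots, which is the same unknown-scale problem in another form.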
This is how Autodesk's 123D Catch works to create 3D models. I have used it since its early development in Autodesk Labs, when it was originally called "Project Photofly". But even that does not know scale - you have to manually create a reference scale by defining points a known distance apart.