Nowadays, many ways exist to obtain the depth of a scene. It can be done using several pictures taken from different points of view, or using a laser transmitter/receiver. Unfortunately, a single picture from a standard camera cannot provide both an image and its depth. The goal of this project is to retrieve both pieces of information from a single picture taken from one point of view. It relies on the fact that the defects of the optics are known.
Generally, defects in pictures from a camera are due to the objective's aperture. With an aperture whose defects are known and depend on the position of the captured object in space, it is possible to recover both an image and the depth of the objects in it from a single picture.
This project is a simple C/C++ implementation of the image processing algorithms of this article. The efficiency of the code was a real priority because the project was meant to run on a Zybo board through High Level Synthesis (link). A database of pictures was available, along with Matlab code to deblur them. For depth extraction, only the mathematical algorithms were available, so given the time allotted for this project, the implementation focused on that part.
At the end of this project, the depth extraction algorithms worked under all the constraints of High Level Synthesis (no dynamic allocation, no floating point, limited memory access). To satisfy them, the image was processed in small windows and fixed-point arithmetic was used. Processing takes about 10 minutes per 1758x1171-pixel image. The results were not as precise as the researchers', but considering the available processing power they were satisfying.
This project allowed me to develop the following skills: