Image-based depth extraction has been a long-studied problem. Classical methods rely on the geometry of two or more views observing the same scene. Depth extraction from a single view has also been studied, where geometric constraints requiring prior geometrical knowledge, e.g., parallelism or known object dimensions, were employed. Nonetheless, when natural scenes are involved, depth extraction from a single view becomes a challenge.
To facilitate depth extraction from a single view, this research proposes a novel neural-network-based approach suited for natural environments. To this end, we explore the effectiveness of common loss functions and design a network suited for the problem. We also demonstrate the bi-modal nature of depth values in the landscape scenario, and show how exploiting this aspect improves depth estimation. We evaluate our model on standard datasets and on a new one we generated. Results show improvement over existing state-of-the-art results and predictive capability at ranges of up to 2500 meters.