# Abstracts: Mathematics

by Johannes Ulén

Institution: University of Lund, 2014
Keywords: Regularization; Computer Vision; Segmentation; Dense Stereo; Mathematics and Statistics
ID: 1338461
Record: http://lup.lub.lu.se/record/4777619
Full text: http://lup.lub.lu.se/record/4777619/file/4862875.pdf

## Abstract

At the core of many computer vision models lies the minimization of an objective function consisting of a sum of functions with few arguments. The order of the objective function is defined as the highest number of arguments of any summand. To reduce ambiguity and noise in the solution, regularization terms are included in the objective function, enforcing different properties of the solution. The most commonly used regularization is penalization of boundary length, which requires a second-order objective function. Most of this thesis is devoted to introducing higher-order regularization terms and presenting efficient minimization schemes. One topic of the thesis is the reformulation of a large class of discrete functions into an equivalent form. The reformulation is shown, both in theory and in practical experiments, to be advantageous for higher-order regularization models based on curvature and second-order derivatives. Another topic is the parametric max-flow problem. An analysis is given, showing its inherent limitations for large-scale problems, which are common in computer vision. The thesis also introduces a segmentation approach for finding thin and elongated structures in 3D volumes. Using a line-graph formulation, it is shown how to efficiently regularize with respect to higher-order differential-geometric properties such as curvature and torsion. Furthermore, an efficient optimization approach for a multi-region model is presented which, in addition to standard regularization, is able to enforce geometric constraints such as inclusion or exclusion of different regions. The final part of the thesis deals with dense stereo estimation. A new regularization model is introduced, penalizing the second-order derivatives of a depth or disparity map. Compared to previous second-order approaches to dense stereo estimation, the new regularization model is shown to be more easily optimized.
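The opening sentences can be made concrete with a toy example. The sketch below (not from the thesis; all names, the 2x3 "image", and the brute-force solver are illustrative assumptions) builds a second-order objective: unary data terms take one argument each, and a Potts boundary-length regularizer takes two arguments per term, so the highest number of arguments of any summand is two.

```python
import itertools

def energy(labels, unary, lam, width):
    """Second-order objective: data terms (one argument each) plus a
    Potts boundary-length regularizer (two arguments per summand)."""
    # Data term: cost of assigning label labels[i] to pixel i.
    e = sum(unary[i][x] for i, x in enumerate(labels))
    # Pairwise term: penalize every horizontal/vertical label change,
    # a discrete approximation of the boundary length.
    for i in range(len(labels)):
        if i % width != width - 1 and labels[i] != labels[i + 1]:
            e += lam
        if i + width < len(labels) and labels[i] != labels[i + width]:
            e += lam
    return e

def brute_force_min(unary, lam, width):
    """Exhaustive minimization over binary labelings; feasible only for
    toy sizes (a max-flow solver would be used in practice)."""
    n = len(unary)
    best = min(itertools.product((0, 1), repeat=n),
               key=lambda x: energy(x, unary, lam, width))
    return list(best)

# Made-up 2x3 "image": low unary cost marks each pixel's preferred label.
unary = [[0, 2], [0, 2], [2, 0],
         [0, 2], [2, 0], [2, 0]]
print(brute_force_min(unary, lam=1.0, width=3))  # → [0, 0, 1, 0, 1, 1]
```

Raising `lam` trades data fidelity for shorter boundaries, which is exactly the role the regularization term plays in the models above.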
## Popular Science Summary

In many fields, for example healthcare, situations where large quantities of images need to be analyzed are becoming increasingly common. Often these image collections are so large that it would take a human far too long to go through them. For example, computed tomography can be used to take three-dimensional X-ray images of people, something that is done in hospitals today. Going through these images manually obviously consumes a great deal of physicians' resources, resources that could be put to better use. One of the goals of computer vision is to automate the analysis of large image collections. One of the first steps in analyzing an image automatically is often to draw boundaries between different objects, that is, to segment the image into different regions. To produce such a segmentation, one usually builds a statistical model which says that each pixel in the image belongs to a certain object with a certain probability, for example a lung or the bone marrow. The segmentation is then chosen so that each pixel is assigned to the object with the highest probability at that pixel. The problem, however, is that neither the model nor the images are…
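The per-pixel rule described above can be sketched in a few lines. The object names and probability values below are made-up illustrative numbers, not data from the thesis; `argmax_segmentation` is a hypothetical helper.

```python
# Hypothetical per-pixel probability maps for a four-pixel "image":
# prob[obj][i] is the probability that pixel i belongs to object obj.
prob = {
    "lung":        [0.7, 0.6, 0.2, 0.1],
    "bone marrow": [0.2, 0.3, 0.7, 0.8],
    "background":  [0.1, 0.1, 0.1, 0.1],
}

def argmax_segmentation(prob):
    """Assign each pixel to the object with the highest probability there."""
    n = len(next(iter(prob.values())))
    return [max(prob, key=lambda obj: prob[obj][i]) for i in range(n)]

print(argmax_segmentation(prob))
# → ['lung', 'lung', 'bone marrow', 'bone marrow']
```

Because each pixel is decided independently, noise in the model or the images produces ragged, speckled segmentations; the regularization terms discussed in the abstract are what couple neighboring pixels and smooth the result.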