Institution: University of Washington
Full text PDF: http://hdl.handle.net/1773/40930
Convex optimization is more popular than ever, with extensive applications in statistics, machine learning, and engineering. Nesterov introduced optimal first-order methods for large-scale convex optimization in the 1980s, and extremely fast interior-point methods for small-to-medium-scale convex optimization emerged in the 1990s. Today there is little computational reason to prefer modelling with linear programming over convex programming. Nonetheless, there is room to improve the already sophisticated algorithms for convex optimization.

The thesis makes three primary contributions to convex optimization. First, it develops new, near-optimal barriers for generalized power cones; this matters because the performance of interior-point methods depends on representing convex sets with small-parameter barriers. Second, it introduces an intuitive first-order method that achieves the best theoretical convergence rate and performs better in practice than Nesterov's method. The thesis concludes with a framework for reformulating a convex program by interchanging the objective function and a constraint function, illustrated on several examples.

Advisors/Committee Members: Drusvyatskiy, Dmitriy (advisor).
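For background on the class of optimal first-order methods the abstract refers to, the following is a minimal sketch of Nesterov's accelerated gradient method (the classical baseline, not the thesis's new method) applied to an arbitrary strongly convex quadratic; the problem data `A`, `b` and the function names are illustrative assumptions, not from the thesis.

```python
import numpy as np

def nesterov_agd(grad, x0, step, iters):
    """Accelerated gradient descent with the classic t_k momentum schedule.

    Illustrative sketch only: minimizes a smooth convex function given its
    gradient oracle `grad`, starting point `x0`, and step size 1/L.
    """
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - step * grad(y)                        # gradient step at extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x

# Demo problem (arbitrary data): f(x) = 0.5 x^T A x - b^T x, grad f(x) = A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()         # Lipschitz constant of the gradient

x_star = np.linalg.solve(A, b)          # exact minimizer, for comparison
x_hat = nesterov_agd(grad, np.zeros(2), 1.0 / L, 200)
print(np.allclose(x_hat, x_star, atol=1e-6))
```

On smooth convex problems this scheme attains the O(1/k^2) rate that is optimal for first-order methods, which is the benchmark any competing method (including the one contributed by the thesis) must match.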
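The interchange reformulation mentioned in the third contribution can be illustrated schematically (notation here is mine, not the thesis's): swapping the roles of the objective $f$ and a constraint function $g$ yields a pair of level-set problems,

```latex
\min_{x}\; f(x) \quad \text{s.t.}\quad g(x) \le \tau
\qquad\longleftrightarrow\qquad
\min_{x}\; g(x) \quad \text{s.t.}\quad f(x) \le \sigma .
```

Under mild conditions, solving one problem for a suitable parameter value recovers a solution of the other, which lets a solver be chosen for whichever formulation is computationally easier.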