High-resolution images provide more information for post-processing tasks such as detection, recognition, segmentation, identification, and visualization. The need for high-resolution images arises in health care, where a physician needs a high-quality image of the patient in order to make better decisions or to plan surgery. Breath-hold MRI scanners acquire at high speed and therefore collect a large number of low-quality frames; similarly, some surveillance cameras collect low-quality images because of storage restrictions or limited network bandwidth for transferring the data. It is hard to perform satisfactory image processing on such low-quality images. Image reconstruction models, e.g., multi-frame fusion and single-image super-resolution, have been used successfully in image processing and computer vision to improve image quality. Many algorithms have been proposed either to fuse multiple low-quality images into a single high-resolution image, or to train a model on a set of training images and use that model to improve the quality of a single input image.

The goal of this dissertation is to study previous approaches related to image quality, identify their limitations, and introduce new approaches that overcome them. Since it is difficult to design a single algorithm that works for all types of images, such as MRI images and images obtained by surveillance cameras, we divide the problem into sub-problems. We introduce new algorithms to address the following objectives: (i) to fuse multiple low-resolution frames acquired by an MRI scanner, (ii) to improve the quality of a single image by adding information from training images, and (iii) to perform better recognition when the input facial images are of low quality.
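To make the multi-frame fusion idea concrete, the simplest possible baseline is pixel-wise averaging of frames that have already been registered to a common grid: independent noise cancels while the shared scene content is preserved. The sketch below is a generic illustration of that baseline only, not any of the algorithms introduced in this dissertation; the function name `fuse_frames` and the toy data are hypothetical.

```python
import numpy as np

def fuse_frames(frames):
    """Fuse pre-registered low-quality frames by pixel-wise averaging.

    frames: list of 2-D arrays of identical shape, already aligned.
    Averaging N frames reduces zero-mean independent noise by about
    a factor of sqrt(N), leaving the common scene content intact.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Toy example: ten noisy observations of the same synthetic "scene"
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # smooth ramp image
noisy = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(10)]

fused = fuse_frames(noisy)

# The fused frame is closer to the clean scene than any single frame
err_single = np.abs(noisy[0] - clean).mean()
err_fused = np.abs(fused - clean).mean()
```

Real fusion pipelines differ mainly in the registration step (sub-pixel alignment of the frames) and in replacing the plain mean with a robust or regularized estimator, but the averaging baseline above is the usual point of comparison.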