In constrained optimization problems arising in practice, the objective or constraint functions depend on model parameters that are often learned from data. The learning procedure, however, can only estimate the true parameters. It can be shown that plugging in the nominal estimated values of the parameters can lead to solutions that are far from the optimum of the original problem with the true parameters. The objective of this project is to develop a framework that accounts for parameter uncertainty in the optimization.
Our past work considered least-squares parameter estimation and used a bootstrap technique to account for the distribution of the uncertain parameters. It then formulated a distributionally robust optimization problem whose solution is robust to the parameter uncertainty. This project builds upon our past work to generalize the results to the case in which there is correlation in the estimation procedure. Here, the challenge is to develop an appropriate bootstrap technique and to formulate a suitable distributionally robust problem to
capture the uncertainty. The project requires a strong background and interest in optimization and probability/statistics, and the student will be able to advance his/her skills in these topics. The project is mainly theoretical and
proof-based. There will be case studies and verification in Python. The project can be carried out as a semester project or a Master’s project. To determine whether you are interested, you may read the following paper and check out some of its references: https://arxiv.org/pdf/2112.13932.pdf
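To give a flavor of the pipeline (least-squares estimation, bootstrap, then a robust decision), here is a minimal Python sketch. The linear model, the toy cost, and all variable names are illustrative assumptions, not taken from the paper; the worst-case step is only a crude stand-in for a proper distributionally robust formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumption, not from the paper): a linear model
# y = X @ theta + noise, whose parameter theta enters a downstream
# optimization problem.
n, d = 50, 2
theta_true = np.array([1.0, -0.5])
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.normal(size=n)

# Nominal least-squares estimate of the uncertain parameter.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual bootstrap: resample residuals to approximate the sampling
# distribution of the least-squares estimator.
residuals = y - X @ theta_hat
B = 200
boot = np.empty((B, d))
for b in range(B):
    y_b = X @ theta_hat + rng.choice(residuals, size=n, replace=True)
    boot[b], *_ = np.linalg.lstsq(X, y_b, rcond=None)

# The bootstrap samples give an empirical distribution for theta. A
# distributionally robust problem would optimize against the worst
# distribution near it; as a toy stand-in, pick the candidate decision x
# minimizing the worst-case cost c(x, theta) = theta @ x over the samples.
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
worst_costs = [np.max(boot @ x) for x in candidates]
x_robust = candidates[int(np.argmin(worst_costs))]
print(theta_hat, x_robust)
```

The generalization studied in this project concerns correlation in the estimation procedure, which this i.i.d. residual resampling does not capture.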
To apply, send an email to [email protected] containing 1. one paragraph on your background and fit for the project, and 2. your BS and MS transcripts.
Students with a suitable track record will be contacted.