Research Area A: Fundamentals
Although many scalarization methods have been proposed, systematic studies of the structure of the parameter sets used in scalarizations and of their relation to the set of Pareto optimal solutions are scarce. So far, this idea has mainly been pursued for the weighted sum and the Tchebycheff scalarization. In the former case, the parameter set decomposes into polyhedral subsets that correspond one-to-one to images of the convex hull of the set of Pareto optimal solutions. In the latter case, each Pareto image corresponds to a star-shaped parameter set which is a union of polytopes intersecting in at least one point. In both cases, knowledge about the structure of the parameter sets led to new solution algorithms.
This idea of analyzing parameter sets has not yet been pursued for other scalarizations, which is to be done systematically within this project. First, the parameter sets of p-norm based scalarizations applied to discrete MOPs and to finite representations of nonlinear MOPs are investigated. This links the already established properties for p = 1 and p = ∞ and thereby extends the existing results for the weighted sum and the Tchebycheff method into a more general framework. Further, other scalarization techniques (augmented norm-based methods, Benson’s method, hybrid methods, etc.) are to be investigated analogously. As a result, a quasi-standard for relating parameter sets of scalarizations to Pareto sets is defined. Given these findings, new exact and approximate algorithms are designed based on several different scalarizations.
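As a minimal illustration of the scalarization family under study, the following sketch (with hypothetical problem data) evaluates a weighted p-norm scalarization over the images of a small discrete MOP; p = 1 recovers the weighted sum behavior and p = ∞ the weighted Tchebycheff method.

```python
import numpy as np

def p_norm_scalarization(f_x, weights, ref_point, p):
    """Weighted p-norm distance of the image f(x) to a reference point.

    p = 1 corresponds to the weighted sum (up to the shift by the reference
    point); p = inf yields the weighted Tchebycheff method.
    """
    deviation = weights * np.abs(f_x - ref_point)
    if np.isinf(p):
        return deviation.max()
    return (deviation ** p).sum() ** (1.0 / p)

# Hypothetical discrete MOP: pick the feasible image minimizing the scalarization.
images = np.array([[1.0, 4.0], [2.0, 2.5], [4.0, 1.0]])  # images of feasible solutions
w, z_ref = np.array([0.5, 0.5]), np.array([0.0, 0.0])
for p in (1, 2, np.inf):
    best = min(range(len(images)),
               key=lambda i: p_norm_scalarization(images[i], w, z_ref, p))
    print(p, images[best])
```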
For large-scale and costly nonlinear MOPs, good global optimization algorithms require many function evaluations, and probability-based convergence results do not allow for algorithms with controllable quality. In contrast, local optimization algorithms use derivative information to converge efficiently to an optimum, which, however, is only a local one. We combine both approaches with the goal of merging their advantages. Despite some existing preliminary work, a systematic study and evaluation of hybridization in the context of scalarization is still missing. In a first step, the general design of such hybrid scalarization methods for nonlinear MOPs is investigated. In particular, the stage at which a scalarization technique is applied is essential, and different alternatives will be systematically explored. Moreover, most scalarization techniques work with the help of additional constraints. Hence, in a second step, approaches from global optimization that are capable of dealing with additional constraints are extended to and analyzed for hybrid scalarizations. An analogous transfer also applies to local optimization methods. In particular, for advanced methods like so-called one-shot methods, we build on our previous results for equality constraints and establish convergence properties for inequality constraints. Besides this theoretical work, our innovative hybrid algorithms are benchmarked using the hypervolume indicator and other spread metrics.
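A minimal sketch of one possible hybridization, assuming a hypothetical biobjective test problem and a weighted sum scalarization: a crude global sampling phase provides a starting point, which a local gradient method then refines. The project would study far more sophisticated variants, including the stage at which scalarization enters.

```python
import numpy as np

def f(x):  # hypothetical biobjective test problem
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def scalarized(x, w):  # weighted sum scalarization
    return w @ f(x)

def hybrid_solve(w, n_samples=200, steps=100, lr=0.05, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    # Global phase: random sampling gives a cheap, derivative-free starting point.
    samples = rng.uniform(-3, 3, size=(n_samples, dim))
    x = min(samples, key=lambda s: scalarized(s, w))
    # Local phase: gradient descent (here with the analytic gradient) refines it.
    for _ in range(steps):
        grad = w[0] * 2 * (x - 1.0) + w[1] * 2 * (x + 1.0)
        x = x - lr * grad
    return x

print(hybrid_solve(np.array([0.3, 0.7])))
```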
The goal of this project is to devise MO approximation algorithms for combinatorial MOPs which do not rely on the iterative resolution of scalarization problems and which are applicable, among others, in the circumstances mentioned above. In contrast to existing approximation methods, our algorithmic approach will use structural properties of the set of feasible solutions. The principal idea is to “pivot” from one nondominated point to another, as is done, e. g., in MO simplex algorithms. However, for combinatorial MOPs, such a pivot corresponds to applying a basic combinatorial operation: For example, it is well known that inserting a non-tree edge into a spanning tree and deleting an edge from the resulting fundamental cycle leads to a neighboring, so-called adjacent, spanning tree. Such combinatorial adjacency concepts exist for many combinatorial problems (e. g., adjacent flows differ by single cycles in the residual network and adjacent knapsacks by replacing one item by another), and the corresponding operations are computationally cheap. So far, little attention has been paid to the quality of the subset of nondominated points obtained by such neighborhood-based algorithms. In this project, we aim at closing this research gap and devise approximation algorithms based on neighborhood pivots.
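The spanning tree pivot mentioned above can be sketched as follows (a hypothetical illustration using networkx; edge data and objectives would come from the MOP at hand): inserting a non-tree edge creates exactly one fundamental cycle, and removing any other edge of that cycle yields an adjacent spanning tree.

```python
import networkx as nx

def adjacent_trees(graph, tree):
    """Yield spanning trees adjacent to `tree`: insert a non-tree edge and
    remove one edge of the resulting fundamental cycle."""
    for u, v in graph.edges():
        if tree.has_edge(u, v):
            continue
        candidate = tree.copy()
        candidate.add_edge(u, v, **graph.edges[u, v])
        cycle = nx.find_cycle(candidate, source=u)  # the unique cycle created
        for a, b in cycle:
            if (a, b) == (u, v) or (b, a) == (u, v):
                continue  # keep the inserted edge
            neighbor = candidate.copy()
            neighbor.remove_edge(a, b)
            yield neighbor

# Tiny hypothetical instance: enumerate the trees adjacent to a spanning tree.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 1)])
T = nx.minimum_spanning_tree(G)
for nb in adjacent_trees(G, T):
    print(sorted(nb.edges()))
```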
In contrast to descent strategies applied to scalarized MOPs, descent methods in the Pareto sense in objective space can steer the search for Pareto solutions more directly. These descent methods are well suited to hybrid methods and to continuation strategies that search along a connected Pareto optimal set, and they are better suited to problems with high-dimensional feasible sets, such as problems arising in machine learning. However, the use of these descent strategies is often limited by the need for second-order derivative information. Algorithmic differentiation (AD) packages, such as our tool CoDiPack, provide consistent and machine-accurate derivative information of first and second order. Here, we integrate concepts of AD into existing MO descent strategies to enable applications where derivative information is otherwise not easily accessible. Based on this, directed search strategies for MOPs with locally Lipschitz continuous objectives, which do not exist to date, are developed and analyzed. This involves the computation of appropriate generalized gradients using AD and the MO extension of existing methods from SO nondifferentiable optimization. The resulting methods are then applied to mainstream machine learning applications. Challenges such as the multiplicity of generalizations of the derivative for locally Lipschitz functions, the computation, storage, and aggregation of generalized gradients, and the determination of a descent direction from generalized subdifferentials complicate this task.
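For two smooth objectives, a common descent direction can be obtained as the negative of the minimum-norm element of the convex hull of the two gradients, as in established MO steepest descent methods; the sketch below (with hypothetical gradients) uses the closed-form solution of this one-dimensional subproblem. In the locally Lipschitz setting targeted here, generalized gradients computed via AD would take the place of the gradients.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Return the negative of the minimum-norm element of conv{g1, g2};
    if nonzero, it is a descent direction for both objectives."""
    diff = g1 - g2
    denom = diff @ diff
    # Minimize ||t*g1 + (1-t)*g2||^2 over t in [0, 1] (closed form).
    t = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

# Hypothetical gradients of two objectives at the current iterate
# (in practice delivered by an AD tool).
g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(common_descent_direction(g1, g2))  # -> [-0.5 -0.5]
```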
We consider (non-)linear mixed-integer MOPs with the following property: For any arbitrary fixed assignment of the integer variables, we assume the nondominated set of the remaining (non-)linear MOP to be connected and convex. Additionally, for each of these partial problems, the set of feasible solutions is assumed to be convex. We call the resulting nondominated set of such a partial problem a Pareto-patch associated to the corresponding integral assignment. Furthermore, we assume that any such Pareto-patch consists of more than one point; in particular, we exclude the case of purely integer MOPs. Such convex Pareto-patches can be approximated by (simplicial) sandwiching. In this general approach, inner and outer approximations of the Pareto-patch are computed iteratively based on representative solutions on the individual Pareto-patches. These representative solutions may, for instance, be obtained by solving scalarizations. Due to the convexity of the feasible set, the representative solutions can be combined convexly in order to establish an inner approximation consisting of feasible (and in general dominated) points with a known and controlled distance to the Pareto-patch. Outer approximations can be obtained in several ways, e. g., as the union of level sets of the scalarization problems solved. The outer approximations can be used to bound from above the distance from the (feasible) points of the inner approximation to the unknown Pareto-patch. The challenge in this project is to find algorithmic strategies to identify fully or partially dominated Pareto-patches as early as possible. Dominated Pareto-patches are then to be removed from consideration, while partially dominated Pareto-patches need to be cut down to their nondominated parts, ideally without computing many dominated representatives.
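A minimal biobjective sketch of the sandwiching bound, with hypothetical representatives assumed to be weighted-sum minimizers over a convex patch: the inner approximation consists of convex combinations of the representatives, the outer approximation is the intersection of the weighted-sum level-set half-spaces, and moving an inner point along a fixed direction until it leaves the outer set bounds its distance to the patch.

```python
import numpy as np

# Hypothetical representatives on a convex biobjective Pareto-patch; each is
# assumed to minimize the weighted sum for the weight vector in the same row.
reps = np.array([[0.0, 2.0], [1.0, 0.5], [2.0, 0.0]])
weights = np.array([[0.8, 0.2], [0.5, 0.5], [0.2, 0.8]])
offsets = np.einsum('ij,ij->i', weights, reps)  # level-set offsets b_i = w_i . y_i

def gap_bound(z, d=-np.ones(2) / np.sqrt(2)):
    """Distance z can travel along d before leaving the outer approximation
    {y : w_i . y >= b_i for all i}; bounds its distance to the patch."""
    slopes = weights @ d  # all negative here, since d < 0 and w_i >= 0
    return ((offsets - weights @ z) / slopes).min()

# Inner approximation: midpoints of segments between neighboring representatives.
for z in (reps[:-1] + reps[1:]) / 2:
    print(z, gap_bound(z))
```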
Quantum computation (QC) is an emerging technology that has the potential to outperform classical computers and might therefore be especially interesting in the context of MOO. In this project, we investigate the potential of QC for solving MOPs. We start with biobjective versions of well-researched combinatorial optimization problems such as the exact cover problem and the knapsack problem. A known way to solve combinatorial optimization problems on quantum computers is to transform their formulation into a QUBO (quadratic unconstrained binary optimization) form, for which quantum algorithms exist. We will derive QUBO formulations for various scalarizations. The goal is to analyze which scalarizations are best suited for QC. For quantum annealers, this especially includes the embedding of the models in the corresponding hardware graphs. We also aim to investigate whether the quantum approximate optimization algorithm (QAOA) can be extended to the MO context without explicitly using scalarizations. The resulting approaches are to be compared theoretically and experimentally (using the access to quantum computers and quantum simulators provided by Fraunhofer ITWM). Additionally, we plan to use the inherent stochastic properties of quantum computers for studying uncertain versions of the aforementioned problems.
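A minimal sketch of how a scalarized biobjective knapsack instance could be cast as a QUBO (all data and the penalty strength are hypothetical): the capacity constraint is turned into a quadratic penalty with binary slack bits, and the tiny instance is solved here by brute force in place of a quantum device.

```python
import itertools
import numpy as np

# Hypothetical biobjective knapsack: maximize two profit vectors subject to a
# weight limit; scalarized with weight lam and cast as a QUBO via a penalty.
p1, p2 = np.array([3, 1, 4]), np.array([1, 4, 2])    # the two profit objectives
w, cap = np.array([2, 2, 3]), 4                      # item weights and capacity
lam, penalty = 0.5, 10.0                             # scalarization weight, penalty strength

n_slack = int(np.ceil(np.log2(cap + 1)))             # slack bits encode unused capacity
n = len(w) + n_slack
coeff = np.concatenate([w, 2 ** np.arange(n_slack)]) # w.x + slack == cap when feasible
profit = np.concatenate([lam * p1 + (1 - lam) * p2, np.zeros(n_slack)])

# QUBO matrix Q, minimizing x^T Q x: penalty * (coeff.x - cap)^2 minus profit
# (the constant cap^2 is dropped; x_i^2 = x_i for binaries).
Q = penalty * (np.outer(coeff, coeff) - 2 * cap * np.diag(coeff))
Q -= np.diag(profit)

best = min(itertools.product((0, 1), repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("items chosen:", best[:len(w)])
```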
Research Area B: Data
In robust optimization, there is a variety of concepts for data uncertainty such as strict, light, regret, or recovery robustness. The concept of recovery robustness was developed for SOO and, due to its flexibility, has proven very promising in real-world applications, among others in timetabling, timetable information, scheduling, in the airline industry, and in classic combinatorial problems such as shortest paths or knapsack. However, recovery robustness has not been investigated for robust MOO so far. In this project, we define and analyze recovery robustness for MOO. A solution to an uncertain MOP is called recovery robust efficient if it can be moved to an almost efficient solution nearby once the realized scenario becomes known. Different combinatorial neighborhood relations will be investigated for specifying which solutions are “nearby”. Appropriate definitions of “almost efficient” will be given, closely related to approximate efficient solutions. Based on these specifications, algorithmic concepts for computing recovery robust efficient solutions are to be developed. One approach is to compute the intersection of approximated solution sets for a family of MOPs, each MOP belonging to a scenario that may realize. Scalarization techniques need to be extended, since recovery to the efficient set is more flexible than recovery to an optimal solution of one fixed scalarization. A remedy is to adapt the scalarization parameters dynamically. Another challenge is to transfer the concept of recovery-to-optimality to the MO setting. Finally, applications of these results in the chemical process industry and in robust traffic planning round off the project.
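One possible formalization of the definition above, sketched by brute force on a toy problem (the data, the Hamming neighborhood, and the notion of ε-efficiency are illustrative assumptions, not the project's final definitions):

```python
import itertools
import numpy as np

# Toy uncertain biobjective problem on binary vectors: each scenario perturbs
# the (hypothetical) cost matrix whose two rows are the objectives.
solutions = list(itertools.product((0, 1), repeat=4))
scenarios = [np.array([[3, 1, 4, 2], [2, 4, 1, 3]]) + delta
             for delta in (0, 1, -1)]

def image(C, x):
    return C @ np.array(x)

def eps_efficient(C, x, eps):
    """No feasible image improves on f(x) by at least eps in every objective."""
    y = image(C, x)
    return not any(np.all(image(C, z) <= y - eps) for z in solutions)

def neighbors(x, k=1):  # solutions within Hamming distance k
    return [z for z in solutions if sum(a != b for a, b in zip(x, z)) <= k]

def recovery_robust_efficient(x, eps=1.0, k=1):
    """x can be 'recovered' to an eps-efficient solution in every scenario."""
    return all(any(eps_efficient(C, z, eps) for z in neighbors(x, k))
               for C in scenarios)

print([x for x in solutions if recovery_robust_efficient(x)])
```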
While robust optimization assumes the set of scenarios to be known a priori, decision makers in real-world applications often have only a vague idea of the scenarios that may occur, and bounding this set of scenarios precisely is not possible. However, exact tolerances for the outcome of an optimization problem are often given and must not be exceeded in the solution process. Thus, merely propagating uncertainties in the model parameters to prediction errors in the objectives is insufficient. Here, MOPs with uncertain model parameters are considered. For the Pareto optimal solutions of the nominal case, tolerances in objective space are assumed to be given, and matching scenarios yielding Pareto sets within these bounds are sought, requiring numerically efficient and adaptive scenario generation techniques. In some of our previous work, MO models describing chemical production plants, including uncertainties, have been investigated. These methods start with tolerances in decision space and analyze their impact on objective space. In this project, a novel approach, namely an inverse quantification of the scenarios representing the model uncertainties, is investigated: assuming a certain allowable degree of tolerance of the set of Pareto optimal solutions in objective space, the maximum allowable range of the uncertain model parameters that satisfies this tolerance is quantified using adaptive discretization schemes. Then, using the insights obtained, concepts that return parameter ranges for a predefined tolerance in objective space are investigated. All results are to be tested on models describing chemical production plants but are applicable to any model with unknown model parameters.
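A minimal sketch of the inverse quantification idea under strong simplifying assumptions: a hypothetical one-parameter model, crude random sampling in place of adaptive discretization, and bisection on the parameter range until the sampled fronts stay within the prescribed objective-space tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_front(theta, n=50):
    """Hypothetical model: a biobjective front parameterized by t, shifted by theta."""
    t = np.linspace(0, 1, n)
    return np.column_stack([t + theta, (1 - t) ** 2 - theta])

def front_deviation(delta, n_scenarios=20):
    """Worst observed sup-norm deviation of sampled perturbed fronts
    from the nominal front."""
    nominal = pareto_front(0.0)
    thetas = rng.uniform(-delta, delta, n_scenarios)
    return max(np.abs(pareto_front(th) - nominal).max() for th in thetas)

def max_allowable_range(tolerance, lo=0.0, hi=1.0, iters=30):
    """Bisection for the largest parameter range whose sampled fronts
    stay within the objective-space tolerance."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if front_deviation(mid) <= tolerance:
            lo = mid
        else:
            hi = mid
    return lo

print(max_allowable_range(tolerance=0.1))
```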
A goal of our research is to show that MOO is an important technique also for other mathematical disciplines. Robust optimization deals with SOO under data uncertainty. Given an uncertainty set of different scenarios, most robustness concepts look for a solution which is best in the worst-case scenario. It has been shown that one way to tackle robust optimization is to interpret each scenario as an objective function. The MO counterpart of an SO robust optimization problem is hence a deterministic MOP with a huge number of objective functions. For infinite uncertainty sets, which are common in applications, we obtain a vector-valued problem. In this project, we explore this relationship. On the theoretical level, we transfer properties from robust SOO to MOO and vice versa. The MOPs stemming from robust optimization have very many objective functions; hence, a first focus is to reduce the number of objective functions, in cooperation with project B.6. As an example, in robust optimization it is known that (under mild conditions) it suffices to use the extreme points of the convex hull of the uncertainty set, which reduces even infinite uncertainty sets to a small number of scenarios. Transferring this result to MOPs shows that convex combinations of objective functions need not be considered in MOO. Algorithmically, for non-finite scenario sets, both deterministic and probabilistic approaches for approximating the scenario set itself can be considered. Therefore, we use MO approximation methods to design novel concepts and methods for approximating robust SOO problems. Approximations of infinite scenario sets with a manageable number of scenarios (constant, polylogarithmic, polynomial) and their associated quality criteria are investigated. Further, solution procedures are developed and analyzed with respect to their theoretical and practical running time. The capability of such algorithms may be improved by slicing both Pareto and scenario sets or by using suitable scalarization techniques, among others. We expect the results to be fruitful for both disciplines, robust optimization and MOO.
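The extreme-point reduction can be sketched as follows (hypothetical sampled scenarios, linear objectives, using scipy's convex hull): only the hull vertices survive, and each surviving scenario becomes one objective of the deterministic MOP.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical sampled cost vectors of an uncertain linear objective c(xi) . x;
# for worst-case analysis only the extreme scenarios matter.
scenarios = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [1.2, 1.8], [3.0, 3.0]])
extreme = scenarios[ConvexHull(scenarios).vertices]
print(len(scenarios), "scenarios reduced to", len(extreme), "extreme scenarios")

def mop_objectives(x):
    """The robust counterpart as a deterministic MOP: one linear objective
    per extreme scenario."""
    return extreme @ x

print(mop_objectives(np.array([1.0, 0.5])))
```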
Online optimization deals with problems where the given information is incomplete, in direct contrast to the offline case. Thus, decisions made by online algorithms have to be evaluated in terms of their quality. In the case of SOPs, the well-known concept of competitive analysis, which compares the worst-case online solution with the optimal offline solution, can be exploited: if, for a minimization problem, the objective value of the solution found by an online algorithm is at most c times the objective value of the optimal offline solution, the algorithm is called c-competitive. However, in the case of MOPs, with multiple objectives present, the notion of MO-competitiveness is not unique and has, to the best of our knowledge, not been sufficiently studied in the literature. Thus, this project aims at closing this gap and at developing fundamental models and algorithms for MO online optimization problems. Several approaches are pursued: First, scalarization techniques for offline problems, although not applicable without further ado, offer promising starting points, which are to be evaluated. Moreover, quality measures for the solutions of these algorithms are developed: Important results on approximating the Pareto set of offline problems already exist. However, online optimization problems can include an a priori partitioning of the solution space linked with a simultaneous solution of restricted subproblems; thus, the transfer of these approximations is a challenging subject. Additionally, it is investigated whether and to what extent solutions of different evaluation steps can be characterized such that a prediction about the competitiveness of the algorithm can be made.
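For reference, the classic single-objective notion can be illustrated with the ski rental problem: renting costs 1 per day, buying costs B, and the break-even strategy (rent B − 1 days, then buy) is (2 − 1/B)-competitive. The sketch below verifies this empirically; an MO analogue of such a guarantee is exactly what this project seeks.

```python
# Classic illustration of c-competitiveness (single objective): ski rental.

def online_cost(days, B):
    return days if days < B else (B - 1) + B  # rent B-1 days, then buy

def offline_cost(days, B):
    return min(days, B)                       # the optimum knows `days` in advance

B = 10
ratio = max(online_cost(d, B) / offline_cost(d, B) for d in range(1, 100))
print(ratio, 2 - 1 / B)  # empirical worst case matches the known bound
```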
The presence of multiple, conflicting objective functions is characteristic for MOO and is one reason for the difficulty of such problems. However, sometimes these objectives are not all strictly conflicting; e. g., in public transport planning, the travel time, the ticket price, and the number of transfers are often positively correlated. This observation has been exploited in metaheuristic but not in exact a posteriori approaches. We fill this research gap by replacing “similar” objectives with representative but fewer objectives, solving the associated smaller problem using exact or approximate methods, and reinterpreting the obtained Pareto solutions in the context of the original problem. Clearly, this approach involves a loss in terms of Pareto points found, and the key question is how well that loss can be controlled. First, we define different notions of similarity of objectives and evaluate means of representing several objectives by one. In particular, we derive bounds on the objective function ranges in which Pareto images may be missed by this approach. Second, the Pareto solutions obtained may serve as a representation of the original Pareto set; therefore, we evaluate its quality theoretically as well as empirically on test cases. Third, we utilize approximation algorithms for the Pareto set of the smaller problem and analyze the loss in approximation guarantee when transforming the solutions obtained back to the original problem. Fourth, objective function reduction techniques based on machine learning, in particular statistical classification, are developed and evaluated.
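A minimal correlation-based sketch of the reduction step (synthetic objective samples; a real instance would sample feasible solutions of the MOP, and the project studies more refined similarity notions): objectives whose sampled values correlate above a threshold are grouped and represented by a single member.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of objective vectors over feasible solutions; objectives
# 0 and 1 are strongly correlated, objective 2 conflicts with both.
base = rng.uniform(size=200)
F = np.column_stack([base + 0.05 * rng.normal(size=200),
                     2 * base + 0.05 * rng.normal(size=200),
                     1 - base + 0.05 * rng.normal(size=200)])

corr = np.corrcoef(F.T)
threshold = 0.9
groups, assigned = [], set()
for i in range(F.shape[1]):
    if i in assigned:
        continue
    group = [j for j in range(F.shape[1])
             if j not in assigned and corr[i, j] >= threshold]
    assigned.update(group)
    groups.append(group)

# Represent each group by its first member; the reduced MOP keeps one
# objective per group.
print("groups:", groups, "-> reduced objectives:", [g[0] for g in groups])
```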
Game theory is concerned with predicting the outcome in situations where multiple agents pursue their own selfish interests. Such situations are modeled as strategic games, and the goal is to characterize their equilibrium states, in which no player has an incentive to deviate from her strategy given the other players' strategies. In this project we focus on games in which users have to choose among available resources and are charged a cost depending on the set of all players choosing the same resource. In particular, we will study routing games, in which players choose a path in a network, and scheduling games, where players choose a machine. Classical measures used to assess the efficiency of equilibria in such games are the price of stability and the price of anarchy.
It is usually assumed that the players' costs are scalars (e.g. their travel time). However, in practice, players may want to minimize multiple conflicting objectives (e.g. travel time and energy consumption). In this project, we will study equilibrium concepts that explicitly take this into account, which also makes it possible to deal with uncertain player preferences. A natural stable state is a situation in which no player can unilaterally deviate from her strategy to improve one objective function without worsening another. The implied equilibrium concept is known as Pareto equilibrium. We will investigate which properties of SO routing and scheduling games can be generalized. This includes existence and uniqueness properties as well as MO analogues of the prices of stability and anarchy.
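The Pareto equilibrium condition can be checked by enumeration in tiny games. The sketch below uses a hypothetical biobjective scheduling game with two players and two machines; a profile is a Pareto equilibrium if no unilateral deviation produces a cost vector dominating the player's current one.

```python
import itertools
import numpy as np

# Toy biobjective scheduling game: each of two players picks machine 0 or 1.
# A player's cost is (completion time, energy), both depending on the load of
# the chosen machine (all numbers hypothetical).
speed = np.array([1.0, 2.0])    # machine speeds
energy = np.array([1.0, 3.0])   # energy per job on each machine

def cost(profile, player):
    m = profile[player]
    load = profile.count(m)
    return np.array([load / speed[m], energy[m]])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def is_pareto_equilibrium(profile):
    """No player has a unilateral deviation whose cost vector dominates
    her current one."""
    for player in range(2):
        for m in range(2):
            if m == profile[player]:
                continue
            deviation = list(profile)
            deviation[player] = m
            if dominates(cost(tuple(deviation), player), cost(profile, player)):
                return False
    return True

for profile in itertools.product((0, 1), repeat=2):
    print(profile, is_pareto_equilibrium(profile))
```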
Research Area C: Applications
In microelectronic system design, the CPU speed improves at a significantly higher rate than the memory bandwidth due to multi-core architectures. Many applications, e. g., neural networks, therefore hit the so-called “memory wall”, i. e., their execution speed is restricted by the memory bandwidth rather than by the CPU speed. In addition, today’s main memories, Dynamic Random Access Memories (DRAMs), contribute considerably to the energy demand. Ideally, one aims at efficient DRAMs with high bandwidth, low latency, and low energy consumption. In this regard, the memory controller plays a central role, as it maps and schedules incoming requests to concrete memory accesses. Frequently occurring “row misses”, “read-write switches”, and a bad scheduling of refresh commands during this mapping have a negative effect on the performance of the DRAM. The corresponding impact can be simulated. Up to now, research on improving the DRAM’s performance criteria has mainly focused on mitigating only one of the above-mentioned negative effects. In this project, we aim at conducting a full MO analysis to find trade-offs between the different performance criteria and to optimize the DRAM controller. To this end, three different approaches will be pursued, with varying degrees of data availability: First, we consider an offline MOP with a deterministic memory access pattern known in advance. The results achieved serve as a benchmark for the following approaches. Second, we consider the problem with a probabilistic memory access pattern given by a Markov chain. Third, the problem is considered as an online MOP, where the memory access pattern is completely unknown. It is also studied how this three-step setup of varying data availability carries over to an effective treatment of online MOPs in other disciplines.
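To make the first criterion concrete, the sketch below counts row misses for a given access stream under a simple, entirely hypothetical bank-interleaved address mapping; a real study would use a detailed DRAM simulator and add read-write switches and refresh scheduling as further objectives.

```python
from collections import defaultdict
import random

random.seed(0)
ROW_SIZE = 1024  # bytes per DRAM row (hypothetical)

def row_misses(addresses, n_banks=4):
    """Count row misses under a simple bank-interleaved address mapping:
    an access misses if its bank's open row differs from the target row."""
    open_row = defaultdict(lambda: None)
    misses = 0
    for addr in addresses:
        bank = (addr // ROW_SIZE) % n_banks
        row = addr // (ROW_SIZE * n_banks)
        if open_row[bank] != row:
            misses += 1
            open_row[bank] = row
    return misses

# Sequential access streams are row-friendly; random streams are not.
seq = list(range(0, 2 ** 16, 64))
rnd = [random.randrange(2 ** 16) & ~63 for _ in seq]
print("sequential:", row_misses(seq), "random:", row_misses(rnd))
```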
In our research on molecular modeling, we have discovered regularities in the topologies of low-dimensional Pareto sets of continuous MOPs which we believe to be characteristic for many practical MOPs. Recently, we have explained these regularities and discussed the conditions for their occurrence. However, these investigations are based on only a few examples, and a generalization for MOPs with parameter sets and objective spaces of different dimensions is still lacking. Furthermore, the hypothesis that these observed regularities are characteristic for many engineering MOPs needs to be explored by studying examples from different fields. Our findings also suggest that the regularities can be used for obtaining estimates of the Pareto set from the solutions of the individual SOPs, which would be highly interesting and needs to be explored further. In the present project, we address these issues based on a variety of MOPs from engineering thermodynamics and chemical process engineering. In particular, we study the application of MOO in the thermodynamic modeling of materials, where the objectives are different properties of the materials, which compete as they cannot all be represented equally well. We also study the MOO of chemical processes, where the competing objectives are design features, such as the energy consumption or the environmental impact, and the cost of the measures to reduce them.
Engineering systems are usually described using physical variables. However, the resulting relationships between input and output are often complex and difficult to explore through experiments or numerical simulation, which is troublesome for optimization. This complexity can be addressed by using so-called dimensionless numbers as variables, which reduces the number of variables and also the computational difficulty of the problem substantially. Further, any solution of a problem in dimensionless numbers inherently provides information on an infinite number of solutions in physical variables. This is particularly attractive for MOO, where using dimensionless variables enables the swift exploration of Pareto sets on subsets of the parameter sets in physical variables. We have recently demonstrated this in an MOO study that aimed at the development of improved molecular models of water, building on previous results from an SOO study. In this project, we will explore the use of dimensionless numbers in practical MOPs from engineering thermodynamics and fluid process engineering and use this experience for establishing a theory that guides practical applications of the concept. The results are also of more general importance, as any computer simulation is internally based on numbers, which carry no dimensions and can therefore be interpreted as dimensionless numbers.
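A toy illustration of how one dimensionless solution encodes a whole family of physical solutions, using the Reynolds number as an assumed example (the quantities below are illustrative and not taken from the water-model study):

```python
# One dimensionless solution encodes infinitely many physical ones: fixing the
# Reynolds number Re = rho * v * L / mu determines v for any choice of the
# remaining physical variables.
Re_opt = 5000.0              # hypothetical optimum found in dimensionless variables
rho, mu = 998.0, 1.0e-3      # density and viscosity of water (SI units, approx.)

for L in (0.01, 0.1, 1.0):   # a family of physically distinct designs
    v = Re_opt * mu / (rho * L)
    print(f"L = {L:5.2f} m  ->  v = {v:.4f} m/s")
```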
Mathematical public transport planning deals with various subproblems such as stop location, line planning, timetabling, vehicle scheduling, and delay management. Throughout, two competing objectives are prevalent, specifically maximizing passenger convenience and minimizing operational costs. Traditionally, two simplifications are introduced to reduce the problem size: First, each subproblem is solved separately and in a sequential manner. Second, only one, possibly aggregated, objective is considered. However, this leads to a not necessarily Pareto optimal solution, which may be arbitrarily poor due to the sequentiality. While MO variants for (subsets of) individual subproblems have recently been studied, an approach that simultaneously considers all subproblems and all objectives in a nonscalarized way still needs to be developed. This project aims at analyzing the benefits of an integrated MO approach for public transport planning. In order to generalize the price of sequentiality, which measures the gap between the integrated and the sequential solution for SOPs, the gap between two Pareto sets has to be assessed. To handle the size of the integrated problem, the focus is on finding a sparse representation of the Pareto set by specialized heuristics such as the sequential ones of the eigenmodel. In this setting, practical aspects such as uncertain data and robustness can be considered as well. As MOPs consisting of multiple sequential subproblems are not limited to public transport but also occur, e. g., in process planning and scheduling, a more general solution approach for integrated MOPs is then considered, resulting in a theory for coping with general MO integrated systems.
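One standard way to quantify the gap between two Pareto sets is the additive ε-indicator, sketched below on hypothetical fronts; a generalized price of sequentiality could, for instance, compare the sequential front against the integrated one with such a measure (an illustrative choice, not the project's final definition).

```python
import numpy as np

def additive_epsilon(A, B):
    """Smallest eps such that every point of B is weakly dominated by some
    point of A shifted by eps (minimization convention)."""
    return max(min(np.max(a - b) for a in A) for b in B)

integrated = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
sequential = np.array([[1.5, 4.5], [2.5, 2.5], [4.5, 1.5]])
# The first value shows the integrated front covers the sequential one; the
# second quantifies how much worse the sequential front is.
print(additive_epsilon(integrated, sequential),
      additive_epsilon(sequential, integrated))
```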
Clinical radiotherapy planning is inherently an MOP: maximization of the probability to control the tumor vs. minimization of complication probabilities in organs at risk. In the last 20 years, the cooperation of Fraunhofer ITWM with Heidelberg University and Harvard Medical School (Boston) has brought new planning paradigms into the clinic based on the approximation of Pareto optimal treatment plans. Companies like Raysearch Labs (Stockholm) and Varian Medical Systems (Palo Alto) offer commercial planning systems that reach radiotherapy clinics worldwide. Based on our prior work, we aim at exploring the mathematics of a next generation of advanced treatment plans. To this end, two radiation modalities are mixed for a more individualized and better treatment of one patient. This is modeled as an MO mixed-integer program. In particular, treatment time can be significantly reduced by the more flexible combination of several treatment machines and the linear superposition of radiation. Mathematically, such mixtures can be described as follows: Each single radiotherapy modality is associated with a set of Pareto optimal solutions providing a variety of therapy plans for a patient. Mixing plans means that one Pareto optimal solution is taken from each of the preferred modalities and convex-combined over time in an optimal way. The mathematical task of this project is to understand how composition principles yield better decisions in an MO model. Efficient algorithms for computing relevant Pareto solutions for the modalities are designed. To this end, it is important to understand how to combine quality measures of radiotherapy with quality measures of patient scheduling, like time to treatment or machine coverage. It is expected that the findings can be transferred to other applications satisfying superposition assumptions.
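Under the linear-superposition assumption, mixing reduces to convex combination in objective space; the sketch below (with purely hypothetical plan images) enumerates time-weighted mixtures of one plan per modality.

```python
import numpy as np

# Hypothetical plan images (tumor-control deficit, organ-at-risk dose) of one
# Pareto optimal plan per modality; mixing convex-combines them over time.
plan_modality_1 = np.array([2.0, 1.0])
plan_modality_2 = np.array([1.0, 3.0])

alphas = np.linspace(0, 1, 11)  # fraction of treatment time on modality 1
mixtures = np.array([a * plan_modality_1 + (1 - a) * plan_modality_2
                     for a in alphas])

# Under linear superposition, every mixture is attainable and may dominate
# points achievable by either modality alone.
print(mixtures[[0, 5, 10]])
```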