Research Area A: Fundamentals
Although many scalarization methods have been proposed, the literature contains few systematic studies of the structure of the parameter sets used in scalarizations and of their relation to the set of Pareto optimal solutions. So far, this idea has mainly been pursued for the weighted sum and the Tchebycheff scalarization. In the former case, the parameter set decomposes into polyhedral subsets that are in one-to-one correspondence with the images on the convex hull of the set of Pareto optimal solutions. In the latter case, each Pareto image corresponds to a star-shaped parameter set, a union of polytopes intersecting in at least one point. In both cases, knowledge about the structure of the parameter sets led to new solution algorithms.
This idea of analyzing parameter sets has not yet been pursued for other scalarizations, which is to be done systematically within this project. First, the parameter sets of p-norm based scalarizations applied to discrete MOPs and to finite representations of nonlinear MOPs are investigated. This links the already established properties for p = 1 and p = ∞ and thus extends the existing results for the weighted sum and the Tchebycheff method into a more general framework. Further, other scalarization techniques (augmented norm-based methods, Benson’s method, hybrid methods, etc.) are to be investigated analogously. As a result, a quasi-standard for relating the parameter sets of scalarizations to Pareto sets is defined. Given these findings, new exact and approximate algorithms based on several different scalarizations are designed.
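To make the link between the parameter sets for p = 1, general p, and p = ∞ concrete, the following minimal sketch scans the weight simplex of a weighted p-norm scalarization (with the ideal point as reference) on a made-up discrete biobjective instance; the instance, the weight grid, and all names are our own illustrative assumptions, not the project's actual test setup.

```python
import numpy as np

# Toy discrete biobjective instance (images of feasible solutions); ours.
Y = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [7.0, 2.0], [9.0, 1.0]])
ideal = Y.min(axis=0)

def pnorm_minimizer(w, p):
    """Index of the image minimizing the weighted p-norm distance to the ideal
    point; p = 1 recovers the weighted sum, p = inf the weighted Tchebycheff method."""
    diff = w * (Y - ideal)
    vals = diff.max(axis=1) if np.isinf(p) else (diff ** p).sum(axis=1) ** (1.0 / p)
    return int(vals.argmin())

# Scan the weight simplex and record which weights select which Pareto image:
# the preimages of the images partition the parameter set, as discussed above.
for p in (1, 2, np.inf):
    selection = {}
    for w1 in np.linspace(0.01, 0.99, 99):
        idx = pnorm_minimizer(np.array([w1, 1.0 - w1]), p)
        selection.setdefault(idx, []).append(round(w1, 2))
    print(f"p = {p}: image -> weight range",
          {i: (min(ws), max(ws)) for i, ws in sorted(selection.items())})
```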
For large-scale and costly nonlinear MOPs, good global optimization algorithms require many function evaluations, and probability-based convergence results do not allow for algorithms with controllable quality. In contrast, local optimization algorithms use derivative information to converge efficiently, but only to a local optimum. We combine both methods with the goal of merging their advantages. Despite some existing preliminary work, a systematic study and evaluation of hybridization in the context of scalarization is still missing. In a first step, the general design of such hybrid scalarization methods for nonlinear MOPs is investigated. In particular, the stage at which a scalarization technique is applied is essential, and different alternatives will be systematically explored. Moreover, most scalarization techniques work with additional constraints. So, second, approaches from global optimization that are capable of dealing with additional constraints are extended to and analyzed for hybrid scalarizations. An analogous transfer also applies to local optimization methods. Especially for advanced methods such as so-called one-shot methods, we build on our previous results for equality constraints and establish convergence properties for inequality constraints. Besides this theoretical work, our hybrid algorithms are benchmarked using the hypervolume indicator and spread metrics.
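As one concrete design alternative, the following hedged sketch scalarizes first and then hybridizes: a random multistart serves as the global phase and a derivative-based solver as the local phase. The test functions, weights, and parameter choices are our own assumptions, purely for illustration, and do not represent the project's actual method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2          # toy objectives (ours)
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

def hybrid_scalarized(w, n_starts=20):
    scalar = lambda x: w * f1(x) + (1.0 - w) * f2(x)  # scalarize first
    best = None
    for _ in range(n_starts):                         # global phase: random starts
        x0 = rng.uniform(-3.0, 3.0, size=2)
        res = minimize(scalar, x0, method="BFGS")     # local phase: derivative-based
        if best is None or res.fun < best.fun:
            best = res
    return best.x

for w in (0.2, 0.5, 0.8):
    x = hybrid_scalarized(w)
    print(f"w={w}: x*={np.round(x, 3)}, (f1, f2)=({f1(x):.3f}, {f2(x):.3f})")
```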
Research Area B: Data
While robust optimization assumes the set of scenarios to be known a priori, decision makers in real-world applications often have only a vague idea of the scenarios that may occur, and this set of scenarios cannot be bounded precisely. However, exact tolerances for the outcome of an optimization problem are often given and must not be exceeded in the solution process. Thus, merely propagating uncertainties in the model parameters to prediction errors in the objectives is insufficient. Here, MOPs with uncertain model parameters shall be considered. For the Pareto optimal solutions of the nominal case, tolerances in the objective space are assumed to be given, and matching scenarios yielding Pareto sets within these bounds are sought, requiring numerically efficient and adaptive scenario generation techniques. In some of our previous work, MO models describing chemical production plants have been investigated, including uncertainties. These methods start with tolerances in the decision space and analyze their impact on the objective space. In this project, a novel approach, namely an inverse quantification of the scenarios representing the model uncertainties, is investigated: assuming a certain allowable degree of tolerance of the set of Pareto optimal solutions in the objective space, the corresponding maximum allowable range of the uncertain model parameters is quantified using adaptive discretization schemes. Then, using the insights obtained, concepts that return parameter ranges for a predefined tolerance in the objective space are investigated. All results are to be tested on models describing chemical production plants but are applicable to any model with unknown model parameters.
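A minimal sketch of the inverse direction, under strong simplifying assumptions of ours (a single scalar model parameter, a closed-form toy front, and monotone error growth): bisect on the uncertainty radius until the largest radius is found whose perturbed Pareto images all stay within the prescribed objective-space tolerance.

```python
import numpy as np

X = np.linspace(0.0, 2.0, 201)                  # decision-space grid (toy)

def front(theta):
    """Biobjective images for model parameter theta; toy closed form (ours)."""
    return np.stack([theta * X ** 2, (X - 2.0) ** 2], axis=1)

tol = 0.5                                       # allowed deviation per objective
nominal = front(1.0)                            # nominal-case images

def within_tolerance(delta, n_scenarios=32):
    thetas = np.linspace(1.0 - delta, 1.0 + delta, n_scenarios)
    return all(np.abs(front(t) - nominal).max() <= tol for t in thetas)

lo, hi = 0.0, 1.0                               # bisection on the radius delta
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if within_tolerance(mid) else (lo, mid)
print(f"max allowable parameter radius ~ {lo:.4f}")
```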
Online optimization deals with problems where the given information is incomplete, in direct contrast to the offline case. Thus, decisions made by online algorithms have to be evaluated in terms of their quality. In the case of SOPs, the well-known concept of competitive analysis, which compares the worst-case online solution with the optimal offline solution, can be exploited: if, for a minimization problem, the objective value of the solution found by an online algorithm is at most c times the objective value of the optimal offline solution, the algorithm is called c-competitive. However, in the presence of multiple objectives, the notion of MO-competitiveness is not unique and has, to the best of our knowledge, not been sufficiently studied in the literature. Thus, this project aims at closing this gap and at developing fundamental models and algorithms for MO online optimization problems. Several approaches are pursued. First, scalarization techniques for offline problems, although not directly applicable, offer promising starting points, which are to be evaluated. Moreover, quality measures for the solutions of these algorithms are developed: important results on approximating the Pareto set of offline problems already exist. However, online optimization problems can include an a priori partitioning of the solution space linked with a simultaneous solution of restricted subproblems; the transfer of these approximations is therefore a challenging subject. Additionally, it is investigated whether and to what extent solutions of different evaluation steps can be characterized such that a prediction about the competitiveness of the algorithm can be made.
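To fix notation, a short sketch of classical single-objective competitive analysis on the textbook ski-rental problem: renting costs 1 per day, buying costs B, and the break-even strategy (rent until the rental cost would reach B, then buy) is (2 − 1/B)-competitive. The MO analogue of this ratio is exactly what remains to be defined in this project.

```python
B = 10  # purchase price; renting costs 1 per day

def online_cost(days):
    """Break-even online strategy: rent for B-1 days, then buy."""
    return days if days < B else (B - 1) + B

def offline_cost(days):
    """Optimal offline solution: rent throughout or buy on day one."""
    return min(days, B)

ratios = [online_cost(d) / offline_cost(d) for d in range(1, 100)]
print(f"worst observed ratio: {max(ratios):.3f}  (theory: 2 - 1/B = {2 - 1 / B})")
```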
The presence of multiple, conflicting objective functions is characteristic of MOO and is one reason for the difficulty of such problems. However, these objectives are not always strictly conflicting; e. g., in public transport planning, the travel time, the ticket price, and the number of transfers are often positively correlated. This observation has been used in metaheuristic approaches but not in exact a posteriori ones. We fill this research gap by replacing 'similar' objectives with fewer, representative objectives, solving the associated smaller problem using exact or approximate methods, and reinterpreting the obtained Pareto solutions in the context of the original problem. Clearly, this approach involves a loss in terms of the Pareto points found, and the key question is how well this loss can be controlled. First, we define different notions of similarity of objectives and evaluate means of representing several objectives by one; a sketch of the grouping step follows below. In particular, we derive bounds on the objective function ranges in which Pareto images may be missed by this approach. Second, the Pareto solutions obtained may serve as a representation of the original Pareto set; we therefore evaluate its quality both theoretically and empirically on test cases. Third, we utilize approximation algorithms for the Pareto set of the smaller problem and analyze the loss in the approximation guarantee when transforming the solutions obtained back to the original problem. Fourth, objective function reduction techniques based on machine learning, in particular statistical classification, are developed and evaluated.
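A hedged sketch of the grouping step referenced above: objectives whose pairwise Pearson correlation exceeds a threshold are merged into one representative by averaging. The random data, the threshold, and the greedy grouping rule are illustrative assumptions of ours, not the notions of similarity to be derived in the project.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 2))
F = np.column_stack([base[:, 0],                               # objective 1
                     base[:, 0] + 0.1 * rng.normal(size=200),  # ~ objective 1
                     base[:, 1]])                              # independent objective

corr = np.corrcoef(F, rowvar=False)
threshold = 0.9
groups, assigned = [], set()
for i in range(F.shape[1]):                    # greedy grouping by correlation
    if i in assigned:
        continue
    group = [j for j in range(F.shape[1]) if j not in assigned and corr[i, j] > threshold]
    assigned.update(group)
    groups.append(group)

F_reduced = np.column_stack([F[:, g].mean(axis=1) for g in groups])
print(f"groups: {groups}, reduced from {F.shape[1]} to {F_reduced.shape[1]} objectives")
```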
Game theory is concerned with predicting the outcome in situations in which multiple agents pursue their own selfish interests. Such situations are modeled as strategic games, and the goal is to characterize their equilibrium states, in which no player has an incentive to deviate from her strategy given the other players' strategies. In this project, we focus on games in which users have to choose among available resources and are charged a cost depending on the set of all players choosing the same resource. In particular, we will study routing games, in which players choose a path in a network, and scheduling games, in which players choose a machine. Classical measures used to assess the efficiency of equilibria in such games are the price of stability and the price of anarchy.
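For concreteness, a small sketch of the price of anarchy in a scheduling game: players with given job sizes choose one of two identical machines, a player's cost is the load of her machine, and pure Nash equilibria are found by brute force. The instance data is ours and purely illustrative.

```python
import itertools

jobs = [2.0, 2.0, 1.0, 1.0]   # job sizes (toy instance)
M = 2                         # number of identical machines

def loads(profile):
    L = [0.0] * M
    for size, m in zip(jobs, profile):
        L[m] += size
    return L

def is_nash(profile):
    L = loads(profile)
    # No player may lower her cost by moving her job to another machine.
    return all(L[m] <= L[alt] + jobs[i]
               for i, m in enumerate(profile) for alt in range(M) if alt != m)

profiles = list(itertools.product(range(M), repeat=len(jobs)))
opt = min(max(loads(p)) for p in profiles)                 # optimal makespan
worst_eq = max(max(loads(p)) for p in profiles if is_nash(p))
print(f"price of anarchy = {worst_eq / opt:.3f}")
```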
It is usually assumed that the players' costs are scalars (e.g. their travel time). However, in practice, players may want to minimize multiple conflicting objectives (e.g. travel time and energy consumption). In this project, we will study equilibrium concepts that explicitly take this into account and allow us to deal with uncertain player preferences. It is reasonable to expect states in which no player can unilaterally deviate from her strategy so as to improve one objective function without worsening another. The implied equilibrium concept is known as Pareto equilibrium; a small example follows below. We will investigate which properties of SO routing and scheduling games can be generalized, including existence and uniqueness properties as well as MO analogues of the prices of stability and anarchy.
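The following toy sketch (with made-up bicriteria costs) enumerates the Pareto equilibria of a two-player game: a profile is a Pareto equilibrium if no unilateral deviation Pareto-dominates a player's current cost vector.

```python
import itertools
import numpy as np

# cost[i][a][b] = (time, energy) of player i under action profile (a, b); toy data.
cost = np.array([
    [[[1, 4], [2, 2]],
     [[3, 1], [2, 3]]],   # player 0
    [[[2, 2], [3, 1]],
     [[1, 4], [2, 2]]],   # player 1
], dtype=float)

def dominates(u, v):
    """u Pareto-dominates v: <= in every component, < in at least one."""
    return np.all(u <= v) and np.any(u < v)

def is_pareto_equilibrium(a, b):
    # No unilateral deviation may dominate a player's current cost vector.
    dev0 = any(dominates(cost[0][a2][b], cost[0][a][b]) for a2 in range(2) if a2 != a)
    dev1 = any(dominates(cost[1][a][b2], cost[1][a][b]) for b2 in range(2) if b2 != b)
    return not (dev0 or dev1)

eqs = [(a, b) for a, b in itertools.product(range(2), repeat=2) if is_pareto_equilibrium(a, b)]
print(f"Pareto equilibria: {eqs}")
```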
Research Area C: Applications
In microelectronic system design, CPU speed improves at a significantly higher rate than memory bandwidth due to multi-core architectures. Many applications, e. g., neural networks, therefore hit the so-called "memory wall", i. e., their execution speed is limited by the memory bandwidth rather than by the CPU speed. In addition, today's main memories, Dynamic Random Access Memories (DRAMs), contribute considerably to the energy demand. Ideally, one aims at efficient DRAMs with high bandwidth, low latency, and low energy consumption. In this regard, the memory controller plays a central role, as it maps and schedules incoming requests to concrete memory accesses. Frequently occurring "row misses", "read-write switches", and a bad scheduling of refresh commands during this mapping have a negative effect on the performance of the DRAM; the corresponding impact can be simulated. Up to now, research on improving the DRAM's performance criteria has mainly focused on mitigating only one of the above-mentioned negative effects. In this project, we aim at conducting a full MO analysis to find trade-offs between the different performance criteria and to optimize the DRAM controller. To this end, three different approaches with varying degrees of data availability will be pursued: First, we consider an offline MOP with a deterministic memory access pattern known in advance; the results achieved serve as a benchmark for the following approaches. Second, we consider the problem with a probabilistic memory access pattern given by a Markov chain. Third, the problem is considered as an online MOP, where the memory access pattern is completely unknown. It is also studied how this three-step setup with varying degrees of data availability carries over to an effective treatment of online MOPs in other disciplines.
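A heavily simplified sketch of the first, offline step: a fixed request trace is reordered and scored on two of the criteria named above, row misses and read-write switches, and the Pareto optimal schedules are found by enumeration. The trace format and the numbers are ours and abstract away all real DRAM timing behavior.

```python
import itertools

# (row, is_write) per memory request; the controller may reorder freely here.
trace = [(3, False), (3, True), (7, False), (3, False), (7, True), (7, False)]

def score(schedule):
    row_misses = sum(a[0] != b[0] for a, b in zip(schedule, schedule[1:]))
    rw_switches = sum(a[1] != b[1] for a, b in zip(schedule, schedule[1:]))
    return row_misses, rw_switches

# Offline MOP benchmark: enumerate all orders, keep the Pareto optimal scores.
scores = {score(p) for p in itertools.permutations(trace)}
pareto = sorted(s for s in scores
                if not any(o != s and o[0] <= s[0] and o[1] <= s[1] for o in scores))
print(f"Pareto optimal (row misses, read-write switches): {pareto}")
```

Even on this tiny trace the two criteria genuinely trade off: grouping requests by row minimizes row misses but splits the writes, while grouping reads and writes forces extra row changes.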
In our research on molecular modeling, we have discovered regularities in the topologies of low-dimensional Pareto sets of continuous MOPs which we believe to be characteristic of many practical MOPs. Recently, we have explained these regularities and discussed the conditions for their occurrence. However, these investigations are based on only a few examples, and a generalization to MOPs with parameter sets and objective spaces of different dimensions is still lacking. Furthermore, the hypothesis that the observed regularities are characteristic of many engineering MOPs needs to be explored by studying examples from different fields. Our findings also suggest that the regularities can be used to obtain estimates of the Pareto set from the solutions of the individual SOPs, which would be highly valuable and needs to be explored further. In the present project, we address these issues based on a variety of MOPs from engineering thermodynamics and chemical process engineering. In particular, we study the application of MOO in the thermodynamic modeling of materials, where the objectives are different properties of the materials, which compete as they cannot all be represented equally well. We also study the MOO of chemical processes, where the competing objectives are design features, such as the energy consumption or the environmental impact and the cost of the measures to reduce them.
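One plausible reading of this estimation idea, on a toy problem of our own making: solve the individual SOPs and use the decision-space segment between their minimizers as an estimate of the Pareto set. For two spherical convex quadratics the segment happens to be exact; whether and when such estimates hold for real engineering MOPs is precisely the open question raised above.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2   # toy objectives (ours)
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

x1 = minimize(f1, np.zeros(2)).x               # minimizer of the first SOP
x2 = minimize(f2, np.zeros(2)).x               # minimizer of the second SOP

for t in np.linspace(0.0, 1.0, 5):             # segment as Pareto-set estimate
    x = (1.0 - t) * x1 + t * x2
    print(f"t={t:.2f}: x={np.round(x, 3)}, (f1, f2)=({f1(x):.3f}, {f2(x):.3f})")
```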
Mathematical public transport planning deals with various subproblems such as stop location, line planning, timetabling, vehicle scheduling, and delay management. Two competing objectives are prevalent throughout: maximizing passenger convenience and minimizing operational costs. Traditionally, two simplifications are introduced to reduce the problem size: First, each subproblem is solved separately and in a sequential manner. Second, only one, possibly aggregated objective is considered. However, this leads to a solution that is not necessarily Pareto optimal and may be arbitrarily poor due to the sequentiality. While MO variants of (subsets of) individual subproblems have recently been studied, an approach that simultaneously considers all subproblems and all objectives in a nonscalarized way still needs to be developed. This project aims at analyzing the benefits of an integrated MO approach for public transport planning. In order to generalize the price of sequentiality, which measures the gap between integrated and sequential solutions of SOPs, the gap between two Pareto sets has to be assessed; one candidate measure is sketched below. To handle the size of the integrated problem, the focus is on finding a sparse representation of the Pareto set by specialized heuristics such as the sequential ones of the eigenmodel. In this setting, practical aspects such as uncertain data and robustness can be considered as well. As MOPs consisting of multiple sequential subproblems are not limited to public transport but also occur, e. g., in process planning and scheduling, a more general solution approach for integrated MOPs is then developed, resulting in a theory for coping with general MO integrated systems.
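As one candidate measure for the gap between two Pareto sets, a sketch of the additive epsilon indicator: the smallest componentwise shift after which the sequential front covers the integrated one. Both fronts below are made-up minimization data, for illustration only.

```python
import numpy as np

A = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]])   # integrated front (toy)
B = np.array([[1.5, 5.5], [3.0, 3.5], [5.0, 1.5]])   # sequential front (toy)

def additive_epsilon(B, A):
    """Smallest eps such that for every a in A some b in B satisfies
    b - eps <= a componentwise (minimization)."""
    return max(min((b - a).max() for b in B) for a in A)

print(f"eps(B -> A) = {additive_epsilon(B, A):.3f}")  # price-of-sequentiality proxy
```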
Clinical radiotherapy planning is inherently an MOP: maximization of the probability to control the tumor vs. minimization of complication probabilities in organs at risk. In the last 20 years, the cooperation of Fraunhofer ITWM with Heidelberg University and Harvard Medical School (Boston) has brought new planning paradigms into the clinic based on the approximation of Pareto optimal treatment plans. Companies like Raysearch Labs (Stockholm) and Varian Medical Systems (Palo Alto) offer commercial planning systems that reach radiotherapy clinics worldwide. Based on our prior work, we aim at exploring the mathematics of a next generation of advanced treatment plans. To this end, two radiation modalities are mixed for a more individualized and better treatment of one patient. This is modeled as an MO mixed-integer program. In particular, the treatment time can be significantly reduced by the more flexible combination of several treatment machines and the linear superposition of radiation. Mathematically, such mixtures can be described as follows: each single radiotherapy modality is associated with a set of Pareto optimal solutions providing a variety of therapy plans for a patient; mixing means that one Pareto optimal plan is taken from each of the preferred modalities and that these plans are convex-combined over time in an optimal way. The mathematical task of this project is to understand how such composition principles yield better decisions in an MO model. Efficient algorithms for computing the relevant Pareto solutions for the modalities are designed. To this end, it is important to understand how to combine quality measures of radiotherapy with quality measures of patient scheduling, like time to treatment or machine coverage. It is expected that the findings can be transferred to other applications satisfying superposition assumptions.
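A minimal sketch of the mixing step under the stated linear-superposition assumption: the objective vectors of one plan per modality are convex-combined over a time share, and one possible selection rule picks the share minimizing the worse criterion. All plan data and criteria names are illustrative assumptions of ours, not clinical quantities.

```python
import numpy as np

plan_a = np.array([0.9, 0.3])   # (tumor-control shortfall, organ risk) of modality-A plan (toy)
plan_b = np.array([0.2, 0.8])   # same criteria for the chosen modality-B plan (toy)

lambdas = np.linspace(0.0, 1.0, 11)                  # time share spent on modality B
mixes = np.outer(1.0 - lambdas, plan_a) + np.outer(lambdas, plan_b)

k = mixes.max(axis=1).argmin()                       # one rule: minimize the worse criterion
print(f"best time share on B: {lambdas[k]:.1f}, mixed criteria: {np.round(mixes[k], 3)}")
```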