Optimization Group


General information

In the Oberseminar, members and guests of the working group deliver presentations on various topics of mathematical optimization. Guests are always welcome.

Time & place

on Tuesdays
at 17:15
in 48-208

Oberseminar in the summer semester 2024

The following presentations will take place during the summer semester 2024:

Abstract:
We consider a fashion discounter supplying its many branches with integral multiples from a set of available lot-types. The lot-type design problem (LDP) [Gaul et al. 2010] asks: which (integral) multiples of which (integral) lot-types (assortments of sizes) should be supplied to a set of branches in order to meet a (fractional) expected demand as closely as possible? There is a compact LDP model; however, its integrality gap is so large that it cannot be solved for most practical instances. On the other hand, the tightest LDP model known so far [Gaul et al. 2010] can have billions of variables. (For 12 different sizes, reasonable for lingerie or children's clothing, there are 1,159,533,584 different lot-types if we assume at most 5 items of each size and a total number of items in a lot-type between 12 and 30.) Thus, the tight model cannot be fed to a computer statically for all instances. We show how the tight model, which can be interpreted as a Dantzig-Wolfe decomposition (a standard reformulation to tighten ILP models) of the compact model, can be solved by Augmented Subproblem Grinding, a new Branch-and-Cut-and-Price variant that is well suited for tight models. It enforces a binary branch-and-price tree. One branch consists of a single promising subproblem that is solved immediately ("ground off"). The other branch is tightened by a new constraint ("augmented") that excludes all solutions consisting of only known columns (a non-standard reformulation). The theoretical core component is the identification of a characteristic lifting of dual variables in the reduced master problem for all the constraints that would emerge with new variables.
Computational results on real-world instances as well as on randomized stress tests show that, for the tight LDP model, Augmented Subproblem Grinding speeds up the solution by a factor of more than 100 on average compared to solving the static model, and it solves instances (with up to 9,867,114,720,320 variables) that cannot be solved statically at all.
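The lot-type count quoted in the abstract can be reproduced with a short dynamic program over the generating function \((1 + x + \dots + x^5)^{12}\). The figures (12 sizes, at most 5 items per size, lot size between 12 and 30) come from the abstract; the code itself is only an illustrative sketch:

```python
# Count lot-types: vectors over `sizes` sizes with 0..max_per_size items
# of each size and a total number of items between lo and hi.
def count_lot_types(sizes=12, max_per_size=5, lo=12, hi=30):
    counts = [1]  # counts[s] = number of partial vectors with item total s
    for _ in range(sizes):
        new = [0] * (len(counts) + max_per_size)
        for s, c in enumerate(counts):
            for k in range(max_per_size + 1):
                new[s + k] += c
        counts = new
    return sum(counts[lo:hi + 1])

print(f"{count_lot_types():,}")  # reproduces the 1,159,533,584 from the abstract
```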

The lecture will take place (jointly with the Felix Klein Colloquium) on Thursday, April 25, 2024, at 17:15 in room 48/210.

Oberseminar in the winter semester 2023/24

The following presentations will take place during the winter semester 2023/24:

Abstract:
The timetable is the heart of a public transportation system. It not only communicates the service to customers but also has a large impact on the outcome of subsequent cost-sensitive planning steps, e.g., vehicle and crew scheduling. Moreover, timetables often have to be modified to varying extents due to planned construction sites, so timetabling is a frequent and fundamental planning task. Many public transportation networks are operated in a periodic manner. It is therefore of interest to design and optimize periodic timetables. The main mathematical model for this purpose is the Periodic Event Scheduling Problem (PESP). We will present several recent successes in periodic timetabling:

  • Geometric Methods:  We will outline tropical neighborhood search, a local heuristic based on the geometric properties of the space of feasible timetables, which proved to be valuable for PESP benchmark instances. Moreover, we will demonstrate a connection between Benders decomposition for PESP and the zonotope generated by feasible cycle offsets.
  • Cutting Planes: Good dual bounds for PESP are notoriously hard to obtain. We investigate the split/MIR closure of the cycle-based MIP formulation of PESP, and show that it is given by so-called flip inequalities. We further discuss how split cuts can be separated in theory and practice, and that the split closure is stable with respect to reformulating the MIP.
  • Integrated Turn-Sensitive Infrastructure-Aware PESP: Although PESP is quite versatile, it has a limited capability of modeling safety distances, so that conflict-free timetables cannot be guaranteed. This can be resolved easily when dwelling times are rather short, but fails when trains occupy parts of the infrastructure for comparably long times, e.g., when turning. We show how to integrate conflict-freeness into PESP by track choice, which automatically integrates aspects of line planning and vehicle scheduling into PESP as well. As a showcase, we demonstrate the practicality of this approach for real-world construction sites on the S-Bahn Berlin network.
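For readers unfamiliar with PESP, its core feasibility question is compact: assign each event a time in \(\{0, \dots, T-1\}\) such that every activity's periodic tension lies in its interval. A brute-force sketch on a hypothetical 3-event instance (all numbers invented for illustration):

```python
from itertools import product

# Brute-force PESP feasibility: find event times pi_v in {0, ..., T-1} such
# that for each activity (i, j, l, u) the periodic tension
# (pi_j - pi_i) mod T lies in [l, u] (assuming 0 <= l <= u < T).
def solve_pesp(n_events, activities, T):
    for pi in product(range(T), repeat=n_events):
        if all(l <= (pi[j] - pi[i]) % T <= u for i, j, l, u in activities):
            return pi
    return None

# hypothetical 3-event instance with period T = 10
activities = [(0, 1, 2, 4), (1, 2, 3, 5), (2, 0, 1, 3)]
print(solve_pesp(3, activities, 10))  # a feasible timetable, e.g. (0, 2, 7)
```

Real instances are, of course, solved with the MIP formulations and cutting planes discussed in the talk, not by enumeration.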

The lecture will take place at the usual time though in the KOMMS room (building 48, 5th floor).

Oberseminar in the summer semester 2023

The following presentations will take place during the 2023 summer semester:

Abstract:
Among the many mathematical models that have been developed to address evacuation problems, network flow models are the central and most widely used operations research models. These models are not only important during emergencies but also essential for addressing traffic congestion in everyday rush hours. Effective solutions would be very helpful for exploring their applicability in practice, supporting the SDG goals in disaster-prone countries. This talk focuses on an overview of and insights into results with time-minimization and flow-maximization objectives. Examples will be given to illustrate the obtained results.
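As a minimal illustration of the network flow models mentioned above, here is a standard Edmonds-Karp maximum-flow computation on a tiny invented evacuation-style network; nodes, arcs, and capacities are purely illustrative and not from the talk:

```python
from collections import deque

# Edmonds-Karp maximum flow on a tiny evacuation-style network:
# node 0 = endangered area (source), node 3 = safe area (sink).
def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n          # BFS for a shortest augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:        # no augmenting path left: flow is maximum
            return total
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t                      # push the bottleneck along the path
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

capacities = [[0, 3, 2, 0],
              [0, 0, 1, 2],
              [0, 0, 0, 2],
              [0, 0, 0, 0]]
print(max_flow(capacities, 0, 3))  # 4
```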

Abstract:
By the seminal work of Bertsimas & Sim (2003), robust combinatorial optimization problems under budget uncertainty can be solved in theory either by solving n+1 deterministic problems or via a compact LP reformulation that introduces only additional continuous variables. Hence, the computational complexity does not increase. In practice, however, computation times can increase significantly compared to the deterministic instance. In this talk, two approaches to speeding up the solution of such problems are presented. On the one hand, we derive a tailored branch-and-bound algorithm. On the other hand, we show how valid inequalities for the deterministic problem can be recycled to obtain new valid inequalities for the robust problem. This is joint work with Timo Gersing and Christina Büsing.
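The "n+1 deterministic problems" result can be illustrated on a toy problem class. The sketch below uses min-cost selection of exactly k items (a stand-in for an arbitrary combinatorial problem) and compares brute-force enumeration of the robust objective against the Bertsimas-Sim decomposition into one deterministic subproblem per deviation threshold; the instance data are invented:

```python
from itertools import combinations

# Robust min-cost selection under budget uncertainty: choose exactly k of n
# items; item i has nominal cost c[i], and up to Gamma chosen items may
# deviate upwards by d[i] (worst case).
def robust_value(chosen, c, d, Gamma):
    worst_devs = sorted((d[i] for i in chosen), reverse=True)[:Gamma]
    return sum(c[i] for i in chosen) + sum(worst_devs)

def solve_brute_force(c, d, k, Gamma):
    n = len(c)
    return min(robust_value(S, c, d, Gamma) for S in combinations(range(n), k))

def solve_n_plus_1(c, d, k, Gamma):
    # Bertsimas-Sim: one deterministic subproblem per threshold theta in
    # {d_1, ..., d_n, 0}; here the subproblem is "pick the k cheapest items"
    # with respect to the adjusted costs c_i + max(d_i - theta, 0).
    n = len(c)
    best = float("inf")
    for theta in set(d) | {0}:
        adjusted = sorted(c[i] + max(d[i] - theta, 0) for i in range(n))
        best = min(best, Gamma * theta + sum(adjusted[:k]))
    return best

c, d = [3, 1, 4, 1, 5, 9, 2, 6], [2, 7, 1, 8, 2, 8, 1, 8]
print(solve_brute_force(c, d, k=3, Gamma=2) == solve_n_plus_1(c, d, k=3, Gamma=2))
# True: both methods agree, as the theorem guarantees
```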

The lecture will take place at the usual time, however, in room 48/210, as it is part of the department's Felix Klein Colloquium.

Abstract:
We regard a natural extension of Hamiltonian graphs: removing a Hamiltonian cycle from a cubic graph leaves a perfect matching. Conversely, removing a perfect matching \(M\) from a cubic graph \(G\) leaves a disjoint union of cycles. Contracting these cycles yields a new graph \(G_M\). The graph \(G\) is star-like if \(G_M\) is a star for some perfect matching \(M\); in particular, Hamiltonian graphs are star-like. We extend the technique used to prove that Hamiltonian graphs satisfy the 3-decomposition conjecture to show that 3-connected star-like graphs satisfy it as well. Finally, we briefly look at the additional challenges that arise when we attempt to generalise the result to tree-like graphs.
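The observation that removing a perfect matching from a cubic graph leaves a disjoint union of cycles is easy to check computationally; a minimal demonstration on \(K_4\), the smallest cubic graph:

```python
# Removing a perfect matching from a cubic graph leaves every vertex with
# degree 2, i.e., a disjoint union of cycles. Demonstration on K4.
edges = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}
matching = {frozenset((0, 1)), frozenset((2, 3))}  # one of K4's perfect matchings
rest = edges - matching
degrees = {v: sum(v in e for e in rest) for v in range(4)}
print(degrees)  # {0: 2, 1: 2, 2: 2, 3: 2} -> the 4-cycle 0-2-1-3-0
```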

Oberseminar in the winter semester 2022/23

The following presentations will take place during the 2022/23 winter semester:

Abstract:
In this talk, we begin by discussing the paper manufacturing process. Following this, we give an overview of various optimization problems that arise in the paper industry. We then restrict ourselves to a specific optimization problem called deckle and trim-loss optimization, which is well known as the cutting stock problem.

About the speaker:
Dr. A. Chandrashekaran received his PhD from the Indian Institute of Technology Madras, India, in 2010. He held a two-year postdoctoral fellowship of the Department of Science and Technology, India, at the C R Rao Institute, Hyderabad, India. He is presently an Assistant Professor of Mathematics at the Central University of Tamil Nadu, Thiruvarur, India. His research interests include optimization, mathematical programming, and game theory.

The lecture will take place at 10:00 a.m. in room 14/420.

Abstract:
In multi-objective optimization (MOO), the goal is to optimize several objectives of a problem at the same time. As a consequence, there is no longer a single "optimal" solution; we look for the set of Pareto-optimal solutions instead. One specific subproblem of MOO is Tri-Objective Linear Programming (TOLP). In 2022, the first algorithm with polynomial delay for the extreme points of TOLP was presented by Fritz Bökler. This presentation is about a master's thesis in which this algorithm is implemented and developed further. TriPoD is the name of the resulting software and stands for Tri-Objective Polynomial Delay. As the thesis is still ongoing, many practical results are preliminary. The first part of the presentation gives an introduction to MOO and Bökler's new algorithm. Then, we look at the state of the art in practical TOLP solving and compare the performance of the new algorithm to established solvers. This leads us to a refinement of the algorithm: we trade polynomial delay for polynomial space. The polynomial-space algorithm introduces the opportunity to parallelize the algorithm. As no current state-of-the-art solver is able to use parallel computing efficiently, this might open up completely new possibilities in the future.
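As a small illustration of the Pareto-optimality notion used above, the following sketch filters a finite set of objective vectors (minimization in every component) down to its nondominated subset; the points are invented:

```python
# Filter a finite set of objective vectors down to the Pareto-optimal
# (nondominated) ones; all objectives are to be minimized.
def nondominated(points):
    def dominates(a, b):  # a is at least as good everywhere, better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5, 3), (2, 2, 2), (3, 1, 4), (2, 6, 5), (4, 4, 4)]
print(nondominated(pts))  # [(1, 5, 3), (2, 2, 2), (3, 1, 4)]
```

For linear programs the Pareto set is generally infinite, which is why algorithms such as the one in the talk enumerate its extreme points instead.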

Abstract:
We introduce Min Cost \(q\)-Multitype Multiset Multicover. Informally speaking, its goal is to find a packing of locations such that the region demands can be covered in any scenario without over-stressing the doctor capacities. For this problem we develop a dynamic programming algorithm step by step. Moreover, we relate it to the One Plan project, for which we also present some new heuristics.

Abstract:
We study multi-modal route planning in a network and present an overview of state-of-the-art algorithms. We sketch the historical development of these algorithms (including preprocessing techniques), explain the algorithmic ideas, and report about running times of implementations of these algorithms. Finally, we discuss current research topics in this area.

Abstract:
We consider the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting, i.e., with unknown processing times. In this setting a common assumption is that jobs can be arbitrarily preempted and resumed later without any overhead. I will start my talk with a proof of the 2-competitiveness of the Weighted Round-Robin Algorithm, a classical result of Kim and Chwa (2003). Then I will give an overview of our new results. First, the 2-competitiveness can be generalized to the setting where jobs arrive online over time. Second, we study the alternative model in which jobs cannot be suspended but only killed and restarted later. For this setting, we performed a tight analysis of the natural scaling strategy and a randomized variant, resulting in competitive ratios of ≈6.197 and ≈3.032, respectively. This strategy can be extended to jobs arriving online as well as scheduling unit-weight jobs on identical parallel machines, for which upper bounds < 10 on the competitive ratio are obtained. This is joint work with Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, and Philipp Warode from TU Berlin.

See also: https://arxiv.org/abs/2211.02044
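The Weighted Round-Robin Algorithm analyzed in the talk shares the machine among all unfinished jobs at rates proportional to their weights. The sketch below simulates this (all jobs released at time 0, preemption without overhead, processing times used only to detect completions, as the algorithm itself is non-clairvoyant) and compares it against the clairvoyant optimum given by Smith's WSPT rule; the job data are invented:

```python
# Weighted Round-Robin on a single machine: every unfinished job j is
# processed at rate w_j / W, where W is the total weight of unfinished jobs.
def wrr_objective(jobs):  # jobs: list of (weight, processing_time), release 0
    t, total, remaining = 0.0, 0.0, list(jobs)
    while remaining:
        W = sum(w for w, _ in remaining)
        dt = min(p * W / w for w, p in remaining)  # time until next completion
        t += dt
        remaining = [(w, p - dt * w / W) for w, p in remaining]
        total += sum(w * t for w, p in remaining if p <= 1e-9)
        remaining = [(w, p) for w, p in remaining if p > 1e-9]
    return total

def wspt_objective(jobs):
    # clairvoyant optimum: Smith's rule, schedule in order of p / w
    t, total = 0.0, 0.0
    for w, p in sorted(jobs, key=lambda j: j[1] / j[0]):
        t += p
        total += w * t
    return total

jobs = [(1, 2), (2, 1), (1, 3)]
print(wrr_objective(jobs), wspt_objective(jobs))  # 15.0 11.0
```

Consistent with the Kim-Chwa result, the simulated objective stays within a factor of 2 of the optimum on this instance.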

Abstract:
A major bottleneck for exact solution methods of multiobjective optimization problems is that the cardinalities of the set of nondominated images and the set of efficient solutions may be exponentially large or even infinite. This strongly motivates the approximation of multiobjective optimization problems, a concept that substantially reduces the number of required solutions while still providing a provable solution quality. Here, it is sufficient to find a set of (not necessarily efficient) solutions that, for each possible image, contains a solution whose image is component-wise at least as good up to a multiplicative constant. Scalarizations are frequently used in approximation methods for multiobjective optimization problems. For example, it has been shown that optimal solutions of the weighted sum scalarization can be used to obtain approximation sets for each instance of each multiobjective minimization problem. However, these approximation results crucially rely on the assumption that all objectives are to be minimized. In fact, it is known that, for the weighted sum scalarization as well as for every other scalarization studied so far with respect to approximation, even the set of all optimal solutions of the scalarization does not, in general, constitute an approximation set in the case of maximization problems.

This raises several questions: Are there intrinsic structural differences between minimization and maximization problems with respect to approximation via scalarizations? Does there exist a scalarization such that, in each instance of each maximization problem, optimal solutions of the scalarization constitute an approximation set? What structural properties are necessary for scalarizations to be useful for the approximation of multiobjective optimization problems in general?

In this talk, we tackle exactly these questions and study the power of scalarizations for the approximation of multiobjective optimization problems from a general point of view.

Abstract:
Anytime algorithms, which allow a practitioner to trade off execution time for solution quality, are of particular interest in multi-objective optimization, where it is often infeasible to identify all efficient solutions. In this talk, we will discuss a theoretical model that, under some mild assumptions, characterizes the ideal trade-off between execution time and solution quality, measured in terms of hypervolume, of anytime algorithms for bi-objective optimization that collect (efficient) solutions sequentially. We consider this ideal trade-off under the assumption that the goal of such a sequential algorithm is to maximize the hypervolume each time a solution is collected. We validate our model on benchmark knapsack problems against an oracle that has complete knowledge of the non-dominated set. The empirical results suggest that our theoretical model approximates the behavior of this optimal model quite well. We also show that our model can be used to guide an epsilon-constraint algorithm in order to achieve close to ideal anytime performance.
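Hypervolume, the quality measure used in the talk, has a simple closed form in the bi-objective case: it is the area dominated by the collected points with respect to a reference point. A sketch for minimization problems (points and reference invented):

```python
# Hypervolume of a point set for bi-objective minimization: the area that
# is dominated by at least one point and dominates the reference point.
def hypervolume_2d(points, ref):
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1]):
        if y < prev_y:  # sweep left to right; skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

print(hypervolume_2d([(3, 1), (1, 3), (2, 2)], ref=(4, 4)))  # 6.0
```

Each newly collected nondominated point adds a rectangle of area to the hypervolume, which is what a sequential anytime algorithm tries to maximize at every step.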
