Work Address: Kaiserstraße 21, 66386 St. Ingbert (Germany)

My name is Barış Can Esmer. Currently, I am a Ph.D. student at Saarland University (and doctoral researcher at CISPA) advised by Dániel Marx.

My current research interests are parameterized complexity, fine-grained lower bounds, and (parameterized) approximation algorithms, though I am interested in theoretical computer science more broadly.

I received my Master’s degree in Mathematics and Computer Science from Saarland University in 2021; before that, I completed my undergraduate studies at Boğaziçi University, Turkey, in 2019.

In my spare time, I usually travel, swim, hit the gym, (try to) play the oud, or play computer games.

Course Name: Randomized Algorithms and Probabilistic Analysis of Algorithms
Role: Tutor
Instructor: Philip Wellnitz
Institution: Max Planck Institute for Informatics

The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size n (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an α-approximate solution in time c^k · n^O(1) if a solution of size k exists (and, more generally, to an extension algorithm that can approximate in a similar way whenever a set can be extended to a solution with k further elements). Our goal is to obtain a d^n · n^O(1) time β-approximation algorithm for the problem with d as small as possible. That is, for every fixed α, c, β ≥ 1, we would like to determine the smallest possible d that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the α-approximate extension algorithm. Our results completely resolve this question:

1. For every fixed α, c, β ≥ 1, a simple algorithm (“approximate monotone local search”) achieves the optimum value of d.
2. Given α, c, β ≥ 1, we can efficiently compute the optimum d up to any precision ε > 0.

Our technique gives novel results for a wide range of problems including Feedback Vertex Set, Directed Feedback Vertex Set, Odd Cycle Transversal, and Partial Vertex Cover. The monotone local search algorithm we use is a simple adaptation of [Fomin et al., J. ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017]. Still, attaining the above results required us to frame the result in a different way and to overcome a major technical challenge. First, we introduce an oracle-based computational model which allows for a simple derivation of lower bounds that, unexpectedly, show that the running time of the monotone local search algorithm is optimal. Second, while it is easy to express the running time of the monotone local search algorithm in various forms, it is unclear how to actually evaluate it numerically for given values of α, β, and c. We show how the running time of the algorithm can be evaluated via a convex analysis of a continuous max-min optimization problem, overcoming the limitations of previous approaches to the α = β case [Fomin et al., J. ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017].

The full version of the paper can be accessed at https://arxiv.org/abs/2306.15331. Research supported by the European Research Council (ERC) consolidator grant No. 725978 SYSTEMATICGRAPH.
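The monotone local search scheme underlying this abstract can be illustrated with a minimal sketch. This is a schematic, not the paper's tuned algorithm: the oracle `extend(X, budget)` and the parameters `t` and `repetitions` are illustrative assumptions standing in for the problem-specific extension algorithm and the optimized trade-off the paper computes.

```python
import random

def approx_monotone_local_search(universe, extend, k, t, repetitions, rng=random):
    """Schematic sketch of (approximate) monotone local search.

    extend(X, budget) is an assumed problem-specific oracle: it returns a
    feasible solution containing X (using roughly alpha * budget further
    elements) whenever X extends to a solution with at most `budget` more
    elements, and None otherwise.
    """
    best = None
    for _ in range(repetitions):
        # Guess t elements of some (unknown) optimum solution uniformly at random.
        X = set(rng.sample(universe, t))
        # Try to extend the guess with the parameterized (approximation) algorithm.
        sol = extend(X, k - t)
        if sol is not None and (best is None or len(sol) < len(best)):
            best = sol
    return best
```

Balancing the guess size t against the number of repetitions (and the c^(k−t) cost of each oracle call) is exactly the optimization that determines the optimum base d in the paper; the sketch simply takes both as fixed inputs.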

Algorithmica

Computing Generalized Convolutions Faster Than Brute Force

Barış Can Esmer, Ariel Kulik, Dániel Marx, and 2 more authors

In this paper, we consider a general notion of convolution. Let D be a finite domain and let D^n be the set of length-n vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be its coordinate-wise application. The f-Convolution of two functions g, h : D^n → {−M, …, M} is

(g ⊛_f h)(v) := Σ_{v_g, v_h ∈ D^n s.t. v = v_g ⊕_f v_h} g(v_g) · h(v_h)

for every v ∈ D^n. This problem generalizes many fundamental convolutions such as Subset Convolution, XOR Product, Covering Product, Packing Product, etc. For an arbitrary function f and domain D, we can compute the f-Convolution via brute-force enumeration in Õ(|D|^(2n) · polylog(M)) time. Our main result is an improvement over this naive algorithm. We show that the f-Convolution can be computed exactly in Õ((c · |D|^2)^n · polylog(M)) time for the constant c := 3/4 when D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of the f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g, h : D^n → {−M, …, M} along with a vector v ∈ D^n, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(v). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in Õ(|D|^(ωn/2) · polylog(M)) time, where ω ∈ [2, 2.372) is the exponent of the currently fastest matrix multiplication algorithm.
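As a quick illustration of the definition, the naive Õ(|D|^(2n)) brute-force enumeration mentioned above can be written directly. This is a sketch of the baseline definition only, not the paper's faster cyclic-partition algorithm; the function name and dict-based representation are illustrative choices.

```python
import itertools

def f_convolution(f, D, n, g, h):
    """Brute-force f-Convolution over the domain D.

    g and h are dicts mapping n-tuples over D to integers; the result maps
    each v in D^n to the sum of g(v_g) * h(v_h) over all pairs (v_g, v_h)
    with v = v_g (+)_f v_h, i.e. f applied coordinate-wise.
    """
    out = {v: 0 for v in itertools.product(D, repeat=n)}
    for vg in itertools.product(D, repeat=n):
        for vh in itertools.product(D, repeat=n):
            v = tuple(f(a, b) for a, b in zip(vg, vh))  # coordinate-wise f
            out[v] += g[vg] * h[vh]
    return out
```

With D = {0, 1}, taking f = OR recovers the Covering Product and f = XOR the XOR Product, matching the special cases named above.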

ESA 2022

Faster Exponential-Time Approximation Algorithms Using Approximate Monotone Local Search

Barış Can Esmer, Ariel Kulik, Dániel Marx, and 2 more authors

In 30th Annual European Symposium on Algorithms (ESA 2022), Jan 2022