Publications

2023

Chaos persists in large-scale multi-agent learning despite adaptive learning rates

In the realm of multi-agent learning, achieving equilibrium in self-play is a complex endeavor. While adaptive learning rates have improved convergence in small games, their effectiveness in large-population settings remains elusive. Our study shows that, even with these adaptive rates in place, chaos still dominates large-population congestion games.

TLDR: Despite the promise of adaptive learning rates, chaos remains prevalent in large multi-agent learning scenarios, even with standard algorithms and minimal strategies.
Toolbox: Chaos Theory, Discrete Dynamical Systems, Asymptotic Calculus
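A toy sketch of the phenomenon (not the paper's model): multiplicative weights in a symmetric two-road congestion game, where the learning rate stands in for the population-scaled step size. A small rate settles at the 50/50 equilibrium, while a large one oscillates indefinitely; the rates and horizon are illustrative choices.

```python
import math

def mwu_step(x, eta):
    # Symmetric two-road congestion game: each road's cost equals the
    # fraction of the population using it; x is the mass on road A.
    c_a, c_b = x, 1.0 - x
    wa = x * math.exp(-eta * c_a)
    wb = (1.0 - x) * math.exp(-eta * c_b)
    return wa / (wa + wb)

def run(eta, x0=0.3, steps=200):
    x, traj = x0, []
    for _ in range(steps):
        x = mwu_step(x, eta)
        traj.append(x)
    return traj

small = run(eta=0.5)   # small learning rate: settles at the 1/2 equilibrium
large = run(eta=8.0)   # large rate (mimicking population scaling): oscillates

conv_gap = abs(small[-1] - 0.5)
osc_gap = max(abs(v - 0.5) for v in large[-50:])
```

In logit coordinates the update is a one-dimensional map whose fixed point at 1/2 loses stability once the rate is large, which is the mechanism behind the persistent oscillations.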

Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Lampros Flokas2, and Georgios Piliouras3

Mathematics of Operations Research (MOR-Submitted)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Presentation]


Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds

In many machine learning applications, we aim to find a point of balance, or equilibrium, on curved spaces known as Riemannian manifolds. In this work, we advance the analysis of the seminal Riemannian gradient descent (RGD) method. We show, for the first time, that RGD converges to Nash equilibria (NE) at a linear rate in strongly monotone settings, unaffected by distance distortions or the curvature of the space.

TLDR: Riemannian GD can compute NE with a step size that is agnostic to the underlying curvature.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Potential Analysis
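A minimal illustration of the Riemannian gradient descent template (tangent-space projection, step, retraction), here for plain minimization on the unit sphere rather than the paper's game setting; the objective and step size are illustrative choices.

```python
import numpy as np

def rgd_sphere(q, p0, eta=0.2, steps=100):
    """Riemannian gradient descent for f(p) = -<p, q> on the unit sphere:
    project the Euclidean gradient onto the tangent space, step, retract."""
    p = p0 / np.linalg.norm(p0)
    for _ in range(steps):
        euc_grad = -q
        riem_grad = euc_grad - np.dot(euc_grad, p) * p   # tangent projection
        p = p - eta * riem_grad
        p = p / np.linalg.norm(p)                        # retraction to the sphere
    return p

rng = np.random.default_rng(0)
q = np.array([0.0, 0.0, 1.0])          # minimizer of f on the sphere
p = rgd_sphere(q, rng.standard_normal(3))
gap = np.linalg.norm(p - q)
```

Note that the step size here is a fixed constant, echoing the curvature-agnostic flavor of the result above.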

Yang Cai*, Michael I. Jordan*, Tianyi Lin*, Argyris Oikonomou*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

Journal of Machine Learning Research (JMLR-Submitted)

*. Alphabetical order; Equal contribution


Smoothed Complexity of SWAP in Local Graph Partitioning

Local search is a widely used method for tackling optimization problems. However, even though it often works well in real-world scenarios, theoretical evaluations sometimes suggest it could be very slow. This contradiction arises because these theoretical tests often involve unlikely, extreme cases. In this study, we discovered that when minor unpredictable changes (or "noise") are introduced to the problem input, algorithms like SWAP, 1-FLIP, and 2-FLIP efficiently handle a range of binary optimization challenges.

TLDR: Despite theoretical concerns, local search algorithms like SWAP and 2-FLIP provably handle real-world optimization tasks efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory
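A hedged sketch of the smoothed-analysis setup for the simplest member of this family, 1-FLIP on Max-Cut (SWAP and 2-FLIP use richer moves); the instance size and noise magnitude are arbitrary illustrative choices.

```python
import random

def one_flip_local_search(n, w):
    """1-FLIP local search for Max-Cut: flip any vertex whose move
    improves the cut, until no single flip helps."""
    side = [random.random() < 0.5 for _ in range(n)]
    flips = 0
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of moving v to the other side of the cut
            gain = sum(w[v][u] * (1 if side[v] == side[u] else -1)
                       for u in range(n) if u != v)
            if gain > 0:
                side[v] = not side[v]
                flips += 1
                improved = True
    return side, flips

random.seed(1)
n = 30
# Smoothed instance: adversarial {0,1} weights plus small random noise.
base = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = base[i][j] + 0.01 * random.random()

side, flips = one_flip_local_search(n, w)
# At a local optimum, no single flip has positive gain.
worst_gain = max(
    sum(w[v][u] * (1 if side[v] == side[u] else -1) for u in range(n) if u != v)
    for v in range(n)
)
```

The smoothed-analysis claim is about the number of improving flips on such noisy instances, not about the quality of the local optimum reached.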

Xi Chen*, Chenghao Guo*, Mihalis Yannakakis*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

35th ACM-SIAM Symposium on Discrete Algorithms (SODA 2024-Submitted)

*. Alphabetical order; Equal contribution

[Presentation]


Exploiting Hidden Structures in Non-convex Games for Convergence to Nash Equilibrium

From adversarial attacks to multi-agent reinforcement learning, in the vast majority of ML tasks we are essentially playing multi-agent games, striving for a Nash equilibrium. Beneath the surface of these complex non-convex game landscapes often lies a simpler convex substructure. This research exploits that core with a new method, 'hidden gradient descent,' which tracks the hidden convex structure and steers the game towards the Nash equilibrium.

TLDR: 'Hidden gradient descent': a novel method for computing Nash equilibria in complex non-convex ML landscapes with hidden convex cores.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems

Iosif Sakos1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Panayotis Mertikopoulos2, and Georgios Piliouras2

37th Conference on Neural Information Processing Systems (NeurIPS 2023)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster]


Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements

In machine learning tasks, the Stochastic Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) algorithms have stood out in min-max optimization and variational inequalities (VIP). While constant step-size versions offer benefits like ease of tuning, their convergence patterns remain intricate. Through Markov Chains, our study provides pivotal insights establishing a first-of-its-kind Law of Large Numbers and the Central Limit Theorem, demonstrating their consistent performance. Finally, we show how to improve the solution accuracy in VIPs using Richardson-Romberg extrapolation.

TLDR: Stochastic EG and GDA algorithms are governed by the Law of Large Numbers and the Central Limit Theorem and their accuracy can be boosted using Richardson extrapolation.
Toolbox: Probability, Ergodic Theory, Markov Chains, Optimization, Dynamical Systems
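The flavor of the result can be sketched on a toy strongly monotone saddle problem (not the paper's general VIP setting): constant step-size SGDA, its ergodic average, and the Richardson-Romberg combination of two step sizes; the objective, steps, and horizons are illustrative.

```python
import random

def sgda(gamma, steps, seed):
    """Constant step-size SGDA on the strongly monotone saddle problem
    f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (unique saddle at the origin),
    with additive gradient noise; returns the ergodic (time) average."""
    rng = random.Random(seed)
    x = y = 1.0
    sx = sy = 0.0
    for _ in range(steps):
        gx = x + y + rng.gauss(0.0, 1.0)   # noisy df/dx
        gy = x - y + rng.gauss(0.0, 1.0)   # noisy df/dy
        x -= gamma * gx                    # descent in x
        y += gamma * gy                    # ascent in y
        sx += x
        sy += y
    return sx / steps, sy / steps

# Law of large numbers: the time average concentrates near the solution.
avg_x, avg_y = sgda(gamma=0.05, steps=200_000, seed=0)

# Richardson-Romberg: combine runs at steps gamma and 2*gamma to cancel
# the leading O(gamma) term of the bias of the stationary average.
x1, y1 = sgda(0.05, 200_000, seed=1)
x2, y2 = sgda(0.10, 200_000, seed=2)
rr_x, rr_y = 2 * x1 - x2, 2 * y1 - y2
```

On this quadratic example the stationary bias happens to vanish, so the extrapolation is a structural illustration; the paper's point is that it cancels the leading bias term in general.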

Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Angeliki Giannou2, Yudong Chen3, and Qiaomin Xie3

37th Conference on Neural Information Processing Systems (NeurIPS 2023)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier


Quadratic Speedup in Finding Nash Equilibria of Quantum Zero-Sum Games

Quantum game theory, which has ties to non-local games and quantum networks, has recently regained attention due to advancements in related fields and machine learning. One key goal is to find the best strategies for the players, known as Nash equilibria in fully competitive settings. A 2008 algorithm by Jain and Watrous computes these equilibria for quantum games at a rate of $O(1/\sqrt{T})$. Our new method, the Single-Call Matrix Mirror Prox (1-MMP), offers a quadratic speedup, setting a new standard in the field.

TLDR: State-of-the-art method for computing Nash equilibria in quantum zero-sum games.
Toolbox: Quantum Computing, Game Theory, Convex Optimization

Francisca Vasconcelos1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1,
Panayotis Mertikopoulos2, Georgios Piliouras3, Umesh Vazirani3, and Michael I. Jordan3

7th Conference on Quantum Techniques in Machine Learning (QTML 2023)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

Oral talk on "Quantum Game Theory" thread (Top 1% Accepted Papers)

The Computational Complexity of Multi-player Concave Games and Kakutani Fixed Points

Kakutani’s Fixed Point theorem is pivotal in game theory and economics, but its computational versions have been limited. This study offers a broad computational view of the theorem, showing that it is PPAD-complete. It delves into multi-player concave games, highlighting the challenges of finding their equilibria, and sheds light on the complexity of the Walrasian equilibrium by proving an advanced version of Berge’s maximum theorem.

TLDR: Computing Kakutani fixed points is PPAD-complete. Both Rosen-Nash equilibria in concave games and Walrasian equilibria in exchange markets share similar complexity for general utilities.
Toolbox: Computational Complexity, Game Theory, Convex Analysis

Christos Papadimitriou*, Manolis Zampetakis*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

24th ACM Conference on Economics and Computation (EC 2023)

*. Alphabetical order; Equal contribution

Submitted to Games and Economic Behavior Journal (GEB)

[Poster] | [Presentation] | [Recorded Talk]


Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games

Adversarial team games involve a team competing against a single opponent in a zero-sum fashion. In this research, we show that finding an approximate Nash equilibrium in these games is complete for the class continuous local search (CLS), by showing that the Moreau envelope of a suitable best-response function acts as a potential under certain natural gradient-based dynamics.

TLDR: Computing Nash Equilibria in adversarial team games is CLS-complete.
Toolbox: Computational Complexity, Game Theory, Convex Analysis, Linear Programming

Fivos Kalogiannis*, Ioannis Anagnostides*, Ioannis Panageas*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

24th ACM Conference on Economics and Computation (EC 2023)

Student Leading Paper / *. Equal Contribution

Submitted to Games and Economic Behavior Journal (GEB)

[Presentation] | [Recorded Talk]



Efficiently Computing Nash Equilibria in Adversarial Team Markov Games

In multi-agent reinforcement learning, predicting the optimal strategy or Nash equilibrium is crucial but challenging. In this research, we focus on games where a team, without coordinating among themselves, plays against a single opponent. Such a setup captures both competitive and cooperative elements. We present a new algorithm that can find near-optimal strategies for these games efficiently.

TLDR: In multi-agent learning, we introduce an efficient algorithm to find near-optimal strategies for teams playing against a single opponent, blending both competition and cooperation settings.
Toolbox: Reinforcement Learning, Game Theory, Non-Convex Programming

Fivos Kalogiannis1, Ioannis Anagnostides2, Ioannis Panageas2, Emmanouil-Vasileios Vlatakis-Gkaragkounis2,
Vaggos Chatziafratis3, and Stelios Stavroulakis3

11th International Conference on Learning Representations (ICLR 2023-Oral Talk)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

Oral talk on "Theory of Machine Learning" thread (Top 3% Accepted Papers)

[Poster] | [Presentation] | [Recorded Talk]


Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games

In ML, two-team games have gained prominence, especially in contexts such as e-sports and multi-GANs. Such games involve teams that compete without internal coordination. Our research highlights that standard methods, including GDA, OGDA, and EG, fall short of identifying optimal strategies. We conclude by introducing novel game dynamics, drawing inspiration from control theory, which effectively uncover the optimal strategy under specific conditions.

TLDR: In team games arising in ML, such as e-sports and multi-GANs, conventional algorithms often fail to identify optimal strategies. We present a control-theory-derived approach that yields superior results in specific scenarios.
Toolbox: Game Theory, Control Theory, Dynamical Systems

Fivos Kalogiannis*, Ioannis Panageas*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

11th International Conference on Learning Representations (ICLR 2023)

*. Alphabetical order; Equal contribution

[Poster] | [Presentation]

2022

First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces

In machine learning, problems on curved spaces, or Riemannian manifolds, present a distinct challenge. Although solutions exist for their flat-space counterparts, problems on curved spaces remain poorly understood. This research introduces techniques such as the Riemannian corrected extragradient (RCEG) method, which guarantees performance that mirrors its Euclidean counterpart.

TLDR: Addressing a century-old open question, this research reveals that von Neumann's minimax points can be computed at a linear rate in geodesically strongly convex-concave settings. The results match the Euclidean ones using the Riemannian corrected extragradient (RCEG) method.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Duality Theory

Michael I. Jordan*, Tianyi Lin*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

36th Conference on Neural Information Processing Systems (NeurIPS 2022-Oral Talk)

*. Alphabetical order; Equal contribution

Oral talk on "Optimization & Machine Learning" thread (Top 3% Accepted Papers)

[Poster] | [Presentation] | [Recorded Talk]


On the convergence of policy gradient methods to Nash equilibria in general stochastic games

We develop a comprehensive framework for analyzing policy gradient methods, ranging from exact policy gradients to stochastic estimates such as the REINFORCE algorithm. This research shows that strategically stable Nash policies are likely to emerge locally. Moreover, Nash policies with significant concave curvature achieve a squared-distance convergence rate of \(O(1/\sqrt{n})\). Remarkably, slight modifications allow for local convergence to deterministic Markov Nash equilibria in a finite number of steps, regardless of the game's inherent randomness.

TLDR: Local Convergence Guarantees of Policy Gradient Methods in a Multi-Agent RL environment.
Toolbox: Multi-Agent RL, Convex Optimization, Applied Probability, Dynamical Systems

Angeliki Giannou*, Kyriakos Lotidis*, Panayotis Mertikopoulos*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

36th Conference on Neural Information Processing Systems (NeurIPS 2022)

*. Alphabetical order; Equal contribution

[Poster] | [Presentation] | [Recorded Talk]


Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals

This study delves into the problem of agnostically learning intersections of halfspaces under the Gaussian distribution. The contribution refines the Statistical Query (SQ) lower bounds to \(n^{\Omega(\log k)}\), coming close to the best-known upper bounds. The approach also extends to SQ learning of other classes, such as "convex subspace juntas" and sets of bounded Gaussian surface area.

TLDR: Near-optimal lower bounds for agnostically learning intersections of halfspaces under Gaussian marginals in the SQ model.
Toolbox: Boolean Learning Theory, Fourier Analysis, High-Dimensional Probability

Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

35th Annual Conference on Learning Theory (COLT 2022)

*. Alphabetical order; Equal contribution

[Presentation] | [Recorded Talk]

2021

On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond

This paper delves into the convergence properties of "Follow the Generalized Leader" (FTGL) methods, integrating both traditional and optimistic no-regret strategies, towards Nash equilibria in $N$-player generic games. The findings show that the convergence rate of FTGL methods to strict Nash equilibria is independent of the players' uncertainty and is determined primarily by the regularizer's geometry near the equilibrium.

TLDR: Exploration of local convergence rates for FTGL techniques in pursuit of Nash Equilibria in generic games.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems, Applied Probability
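A sketch of one member of this family: FTRL with an entropic regularizer (exponential weights) on a toy stream with a strict equilibrium; the payoff stream and step size are illustrative choices, not the paper's setting.

```python
import math

def ftrl_entropy(payoff_seq, eta=0.5):
    """FTRL on the simplex with an entropic regularizer (exponential weights):
    play argmax_x <cumulative payoff, x> - (1/eta) * sum_i x_i log x_i,
    which has the closed form softmax(eta * cumulative payoff)."""
    n = len(payoff_seq[0])
    cum = [0.0] * n
    strategies = []
    for payoff in payoff_seq:
        m = max(eta * c for c in cum)                 # stabilized softmax
        w = [math.exp(eta * c - m) for c in cum]
        s = sum(w)
        strategies.append([v / s for v in w])
        cum = [c + p for c, p in zip(cum, payoff)]
    return strategies

# Toy stream in which action 0 strictly dominates (a "strict equilibrium"):
T = 300
payoffs = [[1.0, 0.0] for _ in range(T)]
traj = ftrl_entropy(payoffs)
final = traj[-1]     # mass rapidly concentrates on the dominant action
```

Swapping the entropy for a different regularizer changes the geometry near the equilibrium, and with it the convergence rate, which is the phenomenon the paper quantifies.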

Angeliki Giannou1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Panayotis Mertikopoulos2

35th Conference on Neural Information Processing Systems (NeurIPS 2021)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] | [Presentation] | [Recorded Talk]


Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent

This study delves into gradient descent ascent (GDA) dynamics in specialized non-convex non-concave zero-sum games, termed "hidden zero-sum games." Players manipulate the inputs of non-linear functions that feed into a convex-concave game. When the internal game is strictly convex-concave, GDA converges to its von Neumann equilibrium. Otherwise, GDA may fail to converge, but leveraging certain techniques ensures its convergence. Uniquely, our convergence guarantees are non-local, which sets this work apart in the area. The research also connects our continuous-dynamics findings to the training of generative adversarial networks.

TLDR: Convergence criteria for GDA in hidden zero-sum games, with connections to the training of generative adversarial networks.
Toolbox: Game Theory, Dynamical Systems, Lyapunov Potentials

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Georgios Piliouras1

35th Conference on Neural Information Processing Systems (NeurIPS 2021)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] | [Presentation] | [Recorded Talk]


On the Approximation Power of Two-Layer Networks of Random ReLUs

This research studies the approximation power of depth-two ReLU networks whose first-layer weights are random. We establish nearly matching upper and lower bounds for approximation in the \(L_2\) and Sobolev norms, in terms of quantities such as the Lipschitz constant of the target, the desired accuracy, and the problem's dimensionality.

TLDR: Tight upper and lower bounds for two-layer ReLU networks with random first-layer weights approximating smooth functions.
Toolbox: Deep Learning Theory, Ridgelet Fourier Analysis, Applied Probability
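A small numerical companion to the setting studied here: random first-layer ReLU features with only the outer layer trained, fit by least squares to a smooth one-dimensional target; the width, domain, and target function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random (untrained) first layer: m ReLU features relu(w_i * x + b_i).
m = 200
w = rng.standard_normal(m)
b = rng.uniform(-2.0, 2.0, m)

def features(x):
    # feature matrix of shape (len(x), m)
    return np.maximum(0.0, np.outer(x, w) + b)

target = np.sin                          # a smooth, Lipschitz test function
x_train = np.linspace(-2.0, 2.0, 400)
A = features(x_train)

# Only the outer layer is trained, via least squares.
coef, *_ = np.linalg.lstsq(A, target(x_train), rcond=None)

err = np.max(np.abs(A @ coef - target(x_train)))   # sup error on the grid
```

The paper's bounds describe how the achievable error scales with the width, smoothness, and dimension; this one-dimensional fit only illustrates the architecture.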

Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

34th Annual Conference on Learning Theory (COLT 2021)

*. Alphabetical order; Equal contribution

[Poster] | [Presentation] | [Recorded Talk] | [Blog Post]


Reconstructing weighted voting schemes from partial information about their power indices

Computational social choice studies the analysis of voting systems. While prior work sought to reconstruct a weighted voting scheme from exact knowledge of each voter's power, we tackle the setting where only partial information about a voter's influence is available. We give two algorithms, based on Chow parameters and Shapley indices respectively, for reconstructing voting schemes from such incomplete data.

TLDR: Algorithms that reconstruct weighted voting schemes from partial information about voter influence.
Toolbox: Boolean Learning Theory, Fourier Analysis, Computational Social Choice
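A sketch of the first ingredient: estimating the Chow parameters, i.e., the correlations E[f(x)] and E[f(x) * x_i], of a weighted voting rule from samples. The weights and sample size are illustrative, and the paper's reconstruction algorithms are far more refined than this estimator.

```python
import random

def weighted_majority(w, theta):
    # ±1 voting rule: outcome is the sign of the weighted vote vs. threshold
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

def chow_parameters(f, n, samples, rng):
    """Estimate the Chow parameters E[f(x)] and E[f(x) * x_i]
    over uniformly random ±1 inputs."""
    est = [0.0] * (n + 1)
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        est[0] += fx
        for i in range(n):
            est[i + 1] += fx * x[i]
    return [e / samples for e in est]

rng = random.Random(0)
n = 5
# Voter 0's weight exceeds the sum of the rest, so they dictate the outcome.
f = weighted_majority([5, 1, 1, 1, 1], 0)
chow = chow_parameters(f, n, 20_000, rng)   # chow[1] near 1, the rest near 0
```

In this extreme example the Chow parameters alone reveal the dictator; the reconstruction problem becomes subtle when influences are comparable and only partially known.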

Huck Bennett*, Anindya De*, Rocco Servedio*, and Emmanouil-Vasileios Vlatakis-Gkaragkounis*

34th Annual Conference on Learning Theory (COLT 2021)

*. Alphabetical order; Equal contribution

[Presentation] | [Recorded Talk]


Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information

We study which Nash equilibria emerge as stable limit points of regularized learning in multi-player games, focusing on the "Follow the Regularized Leader" (FTRL) algorithm. Across a spectrum of feedback uncertainties, we establish a tight link between the stability and the strictness of a Nash equilibrium: an equilibrium is stable and attracting under FTRL with arbitrarily high probability if and only if it is strict.

TLDR: Under the "Follow the Regularized Leader" (FTRL) algorithm, a Nash equilibrium is stable and attracting with high probability if and only if it is strict.
Toolbox: Game Dynamics, Stochastic Stability Analysis, Dynamical Systems

Angeliki Giannou1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Panayotis Mertikopoulos2

34th Annual Conference on Learning Theory (COLT 2021)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] | [Presentation] | [Recorded Talk]

2020

No-regret learning and mixed Nash equilibria: They do not mix

In our study of N-player games, we explore the convergence of no-regret learning dynamics towards Nash equilibria. While it is well known that the empirical frequency of play gravitates towards coarse correlated equilibria, the relationship with Nash equilibria remains ambiguous. We prove that, under "follow-the-regularized-leader" dynamics, mixed Nash equilibria and no-regret learning are fundamentally incompatible: only strict Nash equilibria reliably surface as stable limit points in such settings.

TLDR: Mixed NE are unstable under no-regret dynamics in any N-player game.
Toolbox: Game Theory, Dynamical Systems, Invariant Measures Analysis, Topology

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Thanasis Lianeas2,
Panayotis Mertikopoulos3, and Georgios Piliouras3

34th Conference on Neural Information Processing Systems (NeurIPS 2020-Spotlight talk)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

Spotlight talk on "Game Theory & Optimization" thread

[Poster] | [Presentation] | [Recorded Talk]


Optimal Private Median Estimation under Minimal Distributional Assumptions

Estimating the median of a distribution from limited samples, while preserving data privacy, is a foundational problem in computer science. We address this problem under minimal distributional assumptions, allowing for potential outliers. We devise a time-efficient, privacy-preserving algorithm that achieves the optimal rate for median estimation, leveraging a versatile Lipschitz Extension Lemma.

TLDR: Sample-optimal and computationally efficient algorithm for private median estimation.
Toolbox: Differential Privacy, Algorithmic Statistics, Dynamic Programming
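For orientation, a standard exponential-mechanism baseline for differentially private median estimation on a finite domain; this is a textbook mechanism, not the paper's algorithm (which attains the optimal rate via a Lipschitz Extension Lemma), and the data distribution and parameters are illustrative.

```python
import math
import random

def dp_median(samples, domain, eps, rng):
    """Exponential-mechanism median over a finite candidate domain."""
    def utility(m):
        below = sum(1 for s in samples if s < m)
        above = sum(1 for s in samples if s > m)
        return -abs(below - above)               # sensitivity-1 utility

    # P[output m] proportional to exp(eps * utility(m) / 2)
    weights = [math.exp(eps * utility(m) / 2.0) for m in domain]
    r = rng.random() * sum(weights)
    acc = 0.0
    for m, wt in zip(domain, weights):
        acc += wt
        if acc >= r:
            return m
    return domain[-1]

rng = random.Random(0)
samples = [rng.gauss(50.0, 5.0) for _ in range(500)]   # true median near 50
domain = list(range(101))
est = dp_median(samples, domain, eps=1.0, rng=rng)
```

The rank-based utility has sensitivity 1 under changing one sample, which is what makes the exponential mechanism eps-differentially private here.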

Christos Tzamos*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*, and Ilias Zadik*

34th Conference on Neural Information Processing Systems (NeurIPS 2020-Spotlight talk)

*. Alphabetical order; Equal contribution

Spotlight talk on "Security & Privacy" thread

[Poster] | [Presentation] | [Recorded Talk]


Smoothed Complexity of Local Max-Cut and Binary Max-CSP

Local search is a widely used method for tackling optimization problems. However, even though it often works well in real-world scenarios, theoretical evaluations sometimes suggest it could be very slow. This contradiction arises because these theoretical tests often involve unlikely, extreme cases. In this study, we discovered that when minor unpredictable changes (or "noise") are introduced to the problem input, algorithms like FLIP methods efficiently handle a range of binary optimization challenges.

TLDR: Despite theoretical concerns, local search algorithms based on single flips provably handle real-world optimization tasks efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory

Xi Chen*, Chenghao Guo*, Mihalis Yannakakis*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*, and Xinzhi Zhang*

52nd Annual ACM Symposium on Theory of Computing (STOC 2020-Spotlight talk)

*. Alphabetical order; Equal contribution

[Presentation] [Video]

2019

Poincaré Recurrence, Cycles, and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games

We analyze the behavior of standard Gradient Descent-Ascent dynamics in the context of hidden bilinear games. Players in these games control the inputs of a function tied to a bilinear zero-sum game, similar to the competition in Generative Adversarial Networks. Our findings reveal that GDA dynamics in these games might not always converge to the game-theoretically meaningful solution. Instead, they can display recurrent behaviors or even converge to spurious non-min-max equilibria.

TLDR: In several non-convex ML scenarios, standard GDA dynamics fail to converge to the expected game solutions, instead exhibiting periodic behavior or convergence to spurious non-min-max equilibria.
Toolbox: GANs, Game Dynamics, Invariant Measures, Lyapunov Potentials, Topology
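The simplest instance of the failure mode: simultaneous GDA on the bilinear game f(x, y) = x*y, whose unique min-max equilibrium is the origin. In discrete time the iterates spiral outward rather than converging; the step size and horizon are illustrative.

```python
def gda(x, y, eta, steps):
    """Simultaneous gradient descent-ascent on the bilinear game f(x, y) = x*y."""
    traj = [(x, y)]
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x     # descent in x, ascent in y
        traj.append((x, y))
    return traj

traj = gda(1.0, 0.0, eta=0.1, steps=500)
r0 = traj[0][0] ** 2 + traj[0][1] ** 2       # squared distance from the equilibrium
r_end = traj[-1][0] ** 2 + traj[-1][1] ** 2  # grows by a factor (1 + eta**2) per step
```

The continuous-time flow conserves the squared distance (it cycles), which is the Poincaré-recurrence side of the story; discretization turns the cycles into outward spirals.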

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Georgios Piliouras

33rd Conference on Neural Information Processing Systems (NeurIPS 2019-Spotlight talk)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

Spotlight talk on "Game Theory & Optimization" thread

[Poster] [Presentation] [Video]


Efficiently avoiding saddle points with zero order methods: No gradients required

Pushing the boundaries of gradient-free optimization in machine learning, we study methods that need no gradient information, only function values. We design a novel zero-order method that efficiently reaches good solutions while avoiding problematic saddle points. Our method matches the speed of traditional approaches without the unnecessary complexities of earlier approaches.

TLDR: A simple derivative-free method efficiently avoids saddle points.
Toolbox: Non-convex Optimization, Applied Probability, Dynamical Systems
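A hedged sketch of the idea (not the paper's exact algorithm): a two-point zero-order gradient estimate along random Gaussian directions, run on a test function with a strict saddle; the function, step size, and horizon are illustrative choices.

```python
import random

def f(x, y):
    # Test function with a strict saddle at the origin and
    # global minima at (0, ±sqrt(1/2)), where f = -1/4.
    return x ** 2 - y ** 2 + y ** 4

def zo_grad(func, x, y, mu, rng):
    # Two-point zero-order estimate: directional derivative along a
    # random Gaussian direction, using only function evaluations.
    ux, uy = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    d = (func(x + mu * ux, y + mu * uy) - func(x - mu * ux, y - mu * uy)) / (2.0 * mu)
    return d * ux, d * uy

rng = random.Random(0)
x, y = 1e-3, 1e-3            # initialized near (not exactly at) the saddle
eta = 0.02
for _ in range(3000):
    gx, gy = zo_grad(f, x, y, mu=1e-4, rng=rng)
    x, y = x - eta * gx, y - eta * gy

val = f(x, y)                # should approach the minimum value -1/4
```

The randomness of the direction is what lets the iterate pick up the unstable direction of the saddle and escape using function values alone.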

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Georgios Piliouras2

33rd Conference on Neural Information Processing Systems (NeurIPS 2019)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster]


2018

Pattern Search Multidimensional Scaling

Nonlinear manifold learning is crucial in ML for unveiling hidden data structures. We offer a new twist on the classic multi-dimensional scaling (MDS) method by utilizing derivative-free optimization techniques. Specifically, instead of the usual gradient descent, we sample and evaluate candidate "moves" within a sphere of fixed radius around every point in the embedded space.

TLDR: Derivative-free methods for multi-dimensional scaling of complex data structures.
Toolbox: Manifold Learning, Non-convex & Black-box Optimization, Local Search Heuristics
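A compact sketch of the approach described above, with illustrative choices for the number of candidate moves per point, the number of sweeps, and the radius schedule (all assumptions, not the paper's tuned settings):

```python
import math
import random

def stress(X, D):
    # squared mismatch between embedded and target pairwise distances
    n = len(X)
    return sum((math.dist(X[i], X[j]) - D[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

def pattern_search_mds(D, dim=2, radius=1.0, sweeps=60, moves=8, seed=0):
    """Derivative-free MDS: for each point, try a few random moves on a
    sphere of the current radius, keep a move only if it lowers the
    stress, and shrink the radius when a full sweep yields no improvement."""
    rng = random.Random(seed)
    n = len(D)
    X = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    s0 = s = stress(X, D)
    for _ in range(sweeps):
        improved = False
        for i in range(n):
            for _ in range(moves):
                step = [rng.gauss(0.0, 1.0) for _ in range(dim)]
                norm = math.hypot(*step)
                cand = [X[i][k] + radius * step[k] / norm for k in range(dim)]
                old, X[i] = X[i], cand
                s_new = stress(X, D)
                if s_new < s:
                    s, improved = s_new, True
                else:
                    X[i] = old
        if not improved:
            radius /= 2.0          # refine the search scale
    return X, s, s0

# Target: pairwise distances of 6 points on a unit circle in the plane.
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6)) for k in range(6)]
D = [[math.dist(p, q) for q in pts] for p in pts]
X, final_stress, init_stress = pattern_search_mds(D)
```

Because every accepted move strictly lowers the stress, the procedure is monotone by construction; the shrinking radius plays the role that a decaying step size plays in gradient methods.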

Georgios Paraskevopoulos1, Efthymios Tzinis1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1 and Alexandros Potamianos2

Transactions on Machine Learning Research (TMLR-Submitted)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier