Optimization · Convex Optimization

Title | Authors
A Communication Efficient Stochastic Multi-Block Alternating Direction Method of Multipliers | Hao Yu
A First-Order Algorithmic Framework for Distributionally Robust Logistic Regression | Jiajin Li · Sen Huang · Anthony Man-Cho So
Acceleration via Symplectic Discretization of High-Resolution Differential Equations | Bin Shi · Simon Du · Weijie Su · Michael Jordan
An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums | Hadrien Hendrikx · Francis Bach · Laurent Massoulié
An adaptive Mirror-Prox method for variational inequalities with singular operators | Kimon Antonakopoulos · Veronica Belmega · Panayotis Mertikopoulos
Blended Matching Pursuit | Cyrille Combettes · Sebastian Pokutta
Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients | Jun Sun · Tianyi Chen · Georgios Giannakis · Zaiyue Yang
Complexity of Highly Parallel Non-Smooth Convex Optimization | Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
Efficient Symmetric Norm Regression via Linear Sketching | Zhao Song · Ruosong Wang · Lin Yang · Hongyang Zhang · Peilin Zhong
General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme | Tao Sun · Yuejiao Sun · Dongsheng Li · Qing Liao
Interior-Point Methods Strike Back: Solving the Wasserstein Barycenter Problem | DongDong Ge · Haoyue Wang · Zikai Xiong · Yinyu Ye
Necessary and Sufficient Geometries for Gradient Methods | Daniel Levy · John Duchi
On the Curved Geometry of Accelerated Optimization | Aaron Defazio
Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm | Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD | Phuong Ha Nguyen · Lam Nguyen · Marten van Dijk
Trajectory of Alternating Direction Method of Multipliers and Adaptive Acceleration | Clarice Poon · Jingwei Liang
A Generic Acceleration Framework for Stochastic Composite Optimization | Andrei Kulunchakov · Julien Mairal
A unified variance-reduced accelerated gradient method for convex optimization | Guanghui Lan · Zhize Li · Yi Zhou
Accelerating Rescaled Gradient Descent: Fast Optimization of Smooth Functions | Ashia Wilson · Lester Mackey · Andre Wibisono
Communication trade-offs for Local-SGD with large step size | Aymeric Dieuleveut · Kumar Kshitij Patel
Convergence-Rate-Matching Discretization of Accelerated Optimization Flows Through Opportunistic State-Triggered Control | Miguel Vaquero · Jorge Cortes
Decentralized sketching of low rank matrices | Rakshith Sharma Srinivasa · Kiryung Lee · Marius Junge · Justin Romberg
Differentiable Convex Optimization Layers | Akshay Agrawal · Brandon Amos · Shane Barratt · Stephen Boyd · Steven Diamond · J. Zico Kolter
Dimension-Free Bounds for Low-Precision Training | Zheng Li · Christopher De Sa
Fast and Accurate Stochastic Gradient Estimation | Beidi Chen · Yingchen Xu · Anshumali Shrivastava
Fast, Provably Convergent IRLS Algorithm for p-norm Linear Regression | Deeksha Adil · Richard Peng · Sushant Sachdeva
Hamiltonian descent for composite objectives | Brendan O'Donoghue · Chris J. Maddison
High-Dimensional Optimization in Adaptive Random Subspaces | Jonathan Lacotte · Mert Pilanci · Marco Pavone
Optimal Stochastic and Online Learning with Individual Iterates | Yunwen Lei · Peng Yang · Ke Tang · Ding-Xuan Zhou
Primal-Dual Block Generalized Frank-Wolfe | Qi Lei · Jiacheng Zhuo · Constantine Caramanis · Inderjit S Dhillon · Alexandros Dimakis
Stochastic Frank-Wolfe for Composite Convex Minimization | Francesco Locatello · Alp Yurtsever · Olivier Fercoq · Volkan Cevher
Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization | Adithya M Devraj · Jianshu Chen