Many important network design problems are fundamentally combinatorial optimization problems. Many such problems, however, cannot readily be tackled by distributed algorithms. We propose a Markov approximation technique for synthesizing distributed algorithms for network combinatorial optimization problems. We show that when the log-sum-exp function is used to approximate the optimal value of any combinatorial problem, the resulting solution can be interpreted as the stationary probability distribution of a class of time-reversible Markov chains. Selected Markov chains in this class, or carefully perturbed versions of them, yield distributed algorithms that solve the log-sum-exp approximated problem. Through case studies, we illustrate that the Markov approximation technique not only provides a fresh perspective on existing distributed solutions, but also helps generate new distributed algorithms in other problem domains with provable performance. We believe the Markov approximation technique will find application in many problem domains, and this work serves as a call for participation.
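As a minimal sketch of the idea on a toy problem (all configuration names and objective values below are hypothetical, and the chain shown is one simple Metropolis-style choice among the class of time-reversible chains, not the specific algorithms from the talk): the log-sum-exp function bounds the combinatorial optimum, the optimal time-sharing weights form a product-form distribution, and a reversible Markov chain with that stationary distribution can be simulated using only pairwise comparisons of objective values.

```python
import math
import random

# Hypothetical toy combinatorial problem: choose one configuration f from a
# finite set F to maximize an objective W(f).
configs = {"a": 1.0, "b": 2.5, "c": 2.0}  # W(f) for each configuration
beta = 10.0  # approximation parameter; larger beta gives a tighter bound

# Log-sum-exp approximation of the optimum:
#   max_f W(f) <= (1/beta) log sum_f exp(beta W(f)) <= max_f W(f) + log|F|/beta
lse = math.log(sum(math.exp(beta * w) for w in configs.values())) / beta

# The approximated problem's optimal time-sharing weights form the
# product-form distribution p(f) proportional to exp(beta W(f)) -- the
# stationary distribution of a class of time-reversible Markov chains.
Z = sum(math.exp(beta * w) for w in configs.values())
p = {f: math.exp(beta * w) / Z for f, w in configs.items()}

# One chain in this class: a Metropolis walk over configurations with a
# uniform proposal. In a network setting, each node would perform such
# transitions using only locally measurable objective differences.
def step(f):
    g = random.choice([x for x in configs if x != f])   # propose a neighbor
    accept = min(1.0, math.exp(beta * (configs[g] - configs[f])))
    return g if random.random() < accept else f

random.seed(0)
f = "a"
counts = {x: 0 for x in configs}
for _ in range(200_000):
    f = step(f)
    counts[f] += 1
# Empirical visit frequencies approach p, which concentrates on the best
# configuration as beta grows.
```

The chain satisfies detailed balance with respect to p, so the long-run fraction of time spent in each configuration matches the log-sum-exp optimal time-sharing weights.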
This talk is based on a series of joint works with Lian Lu, Jinlong Tu, and Mohammad Hajiesmaili from The Chinese University of Hong Kong, Chi-Kin Chau from the Masdar Institute of Science and Technology, Zhao Xu from The Hong Kong Polytechnic University, Xiaojun Lin from Purdue University, and Longbo Huang from Tsinghua University.
Minghua Chen received his Ph.D. degree from the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, in 2006. He then spent one year at Microsoft Research Redmond as a Postdoctoral Researcher. He joined the Department of Information Engineering, The Chinese University of Hong Kong, in 2007, where he is now an Associate Professor. He is also currently an Adjunct Associate Professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University.