Distributed $k$-Clustering for Data with Heavy Noise
Neural Information Processing Systems
In this paper, we consider the $k$-center/median/means clustering with outliers problems (or the $(k, z)$-center/median/means problems) in the distributed setting. Most previous distributed algorithms have communication costs that depend linearly on $z$, the number of outliers. Recently, Guha et al. [10] overcame this dependence by considering bi-criteria approximation algorithms that output solutions with $2z$ outliers. When $z$ is large, the extra $z$ outliers discarded by these algorithms may be too many, considering that the data gathering process can be costly. In this paper, we improve the number of outliers to the best possible $(1+\epsilon)z$, while maintaining the $O(1)$-approximation ratio and keeping the communication cost independent of $z$. The problems we consider include the $(k, z)$-center problem, and the $(k, z)$-median/means problems in Euclidean metrics. Our implementation of the algorithm for $(k, z)$-center shows that it outperforms many previous algorithms, both in communication cost and in the quality of the output solution.
Dec-31-2018
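To make the $(k, z)$-center objective concrete, here is a minimal sketch of how the cost of a candidate solution is evaluated: given $k$ chosen centers, the $z$ points farthest from their nearest center are treated as outliers and discarded, and the cost is the maximum distance among the remaining points. The function name and point representation below are illustrative, not from the paper.

```python
import math

def kz_center_cost(points, centers, z):
    """Cost of a candidate (k, z)-center solution: the maximum
    distance from any non-outlier point to its nearest center,
    after discarding the z points farthest from the centers."""
    # Distance from each point to its nearest center, in sorted order.
    dists = sorted(
        min(math.dist(p, c) for c in centers)
        for p in points
    )
    if z >= len(points):
        return 0.0  # every point may be discarded as an outlier
    # Drop the z largest distances (the outliers); the cost is the
    # largest remaining distance.
    return dists[len(points) - z - 1]
```

For example, with one center at the origin and a single far-away point, setting $z = 1$ lets the solution ignore that point, which is exactly the slack a bi-criteria algorithm exploits when it discards $2z$ (or, in this paper, $(1+\epsilon)z$) points instead.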