Decentralized algorithms are well suited to large-scale, complex optimization problems: they avoid the single-point resource bottleneck of centralized algorithms and scale better as networks grow. Decentralized Optimization in Networks: Algorithmic Efficiency and Privacy Preservation provides the reader with theoretical foundations, practical guidance, and problem-solving approaches for decentralized optimization. It shows how to apply decentralized optimization algorithms to improve optimization efficiency (communication efficiency, computational efficiency, fast convergence), solve large-scale problems (training on large-scale datasets), achieve privacy preservation (effectively countering external eavesdropping attacks, differential attacks, etc.), and overcome a range of challenges posed by complex decentralized network environments (random sleep, random link failures, time-varying and directed topologies, etc.).

The book focuses on three themes:
1) Communication efficiency: event-triggered communication, random link failures, zeroth-order gradients.
2) Computational efficiency: variance reduction, Polyak's projection, stochastic gradients, random sleep.
3) Privacy preservation: differential privacy, edge-based correlated perturbations, conditional noise.

Simulation results, including practical application examples, illustrate the effectiveness and practicability of the decentralized optimization algorithms.
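To make the consensus-plus-local-gradient pattern behind many such algorithms concrete, here is a minimal sketch of decentralized gradient descent (DGD) on a ring network. The topology, mixing weights, step size, and least-squares data are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of decentralized gradient descent (DGD) on a ring of 5 nodes.
# Each node i holds a local least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2;
# at every iteration, nodes mix estimates with their neighbors (consensus step)
# and then take a gradient step using only their own local data.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 3

# Local data: each node sees a different slice of the global problem (assumed setup).
A = [rng.standard_normal((10, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(10) for _ in range(n_nodes)]

# Doubly stochastic mixing matrix for a ring topology (each node weights
# itself and its two neighbors equally).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    left, right = (i - 1) % n_nodes, (i + 1) % n_nodes
    W[i, left] = W[i, right] = W[i, i] = 1 / 3

x = np.zeros((n_nodes, dim))   # row i is node i's current estimate
step = 0.01                    # illustrative constant step size

for _ in range(500):
    x_mixed = W @ x            # consensus: combine neighbors' estimates
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    x = x_mixed - step * grads # local gradient step, no central coordinator

# All nodes should approach the minimizer of the summed objective.
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print("max deviation from centralized solution:", np.abs(x - x_star).max())
```

With a constant step size, the node estimates converge to a neighborhood of the centralized solution; the communication-, computation-, and privacy-oriented techniques listed above modify when nodes communicate, how they compute gradients, and what noise they add to exchanged messages.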