Polarization is implicated in the erosion of democracy and the progression to violence, which makes the polarization properties of large algorithmic content selection systems (recommender systems) a matter of concern for peace and security. While algorithm-driven social media do not seem to be a primary driver of polarization at the country level, they could be a useful intervention point in polarized societies. This paper examines algorithmic depolarization interventions aimed at transforming conflict: not suppressing or eliminating it, but making it more constructive. Algorithmic intervention is considered at three stages: what content is available (moderation), how content is selected and personalized (ranking), and how content is presented and controlled (user interface). Empirical studies of online conflict suggest that the exposure-diversity intervention proposed as an antidote to ‘filter bubbles’ not only leaves room for improvement: under some conditions, increasing exposure to opposing views can even worsen polarization. Using civility metrics in conjunction with diversity in content selection may be more effective. However, diversity-based interventions have not been tested at scale, and may not work in the heterogeneous and dynamic contexts of real platforms. Instead, intervening in platform polarization dynamics will likely require continuous monitoring of polarization metrics, such as the widely used ‘feeling thermometer’. These metrics can be used to evaluate product features, and can potentially be engineered as algorithmic objectives. While using any metric as an optimization target can have harmful consequences, it may prove necessary to include polarization measures in the objective functions of recommender algorithms in order to prevent optimization processes from creating conflict as a side effect.
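
To make the final point concrete, the sketch below illustrates one way a polarization measure might enter a ranking objective. It is a minimal, hypothetical example: the item fields (predicted_engagement, predicted_polarization), the polarization_weight parameter, and the linear trade-off are all assumptions for illustration, not the method proposed here; the claim is only that polarization measures could enter the objective function in some form.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_engagement: float    # hypothetical model output, e.g. click probability
    predicted_polarization: float  # hypothetical estimate, e.g. expected shift on a
                                   # feeling-thermometer-style scale

def ranking_score(item: Item, polarization_weight: float = 0.5) -> float:
    """Combine engagement with a polarization penalty in a single objective.

    An illustrative linear scalarization: higher predicted polarization
    lowers the score in proportion to polarization_weight.
    """
    return item.predicted_engagement - polarization_weight * item.predicted_polarization

def rank(items: list[Item], polarization_weight: float = 0.5) -> list[Item]:
    """Order candidate items by the combined score, highest first."""
    return sorted(items, key=lambda it: ranking_score(it, polarization_weight), reverse=True)

# With a nonzero weight, a slightly less engaging but far less polarizing
# item can outrank a more engaging, more polarizing one.
candidates = [
    Item("outrage_post", predicted_engagement=0.9, predicted_polarization=0.8),
    Item("civil_post",   predicted_engagement=0.7, predicted_polarization=0.1),
]
for it in rank(candidates):
    print(it.item_id, round(ranking_score(it), 2))
# civil_post 0.65
# outrage_post 0.5
```

A fixed linear trade-off is the simplest possible choice; consistent with the argument above, any such weight would need continuous calibration against monitored polarization metrics such as the feeling thermometer, since optimizing a single proxy can itself have harmful side effects.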