Skip to content

Improving Online Discourse: Lessons from Musk‘s Provocative Tweets

Elon Musk‘s rise to become the CEO of Tesla and SpaceX is a remarkable business success story. His bold vision and willingness to take risks have fueled innovations in electric vehicles and private space exploration. However, Musk is also known for his blunt, confrontational style – especially on Twitter, where he has over 100 million followers.

Musk uses Twitter to float unconventional ideas, challenge critics, and spar with politicians. To his supporters, this candor is refreshingly transparent for such an influential figure. But his tweeting frequently lands in controversy – mocking gender pronouns, downplaying COVID-19, insulting public figures, and more.

Rather than stoke more division by dwelling on Musk‘s controversial tweets, perhaps this is an opportunity to reflect on how we can design social platforms that bring out humanity‘s best, not its worst.

The Need for More Constructive Online Dialogue

Research on moral psychology suggests confrontational language and self-righteous posturing on social media often have the opposite of their intended effect. Rather than convincing opponents, outrage-fueled takedowns tend to further entrench existing beliefs and identities. This fuels toxic polarization across political and cultural lines. [1]

True attitude change more often results from respectful sharing of personal experiences and perspectives. Platform features that facilitate this lead to more good faith discussions. [2]

This doesn‘t mean avoiding lively disagreements or hard questions. Intellectual tensions are important drivers of progress when channeled constructively. But the research strongly suggests current social platforms are failing at this goal.

Their engagement-optimized algorithms reward emotional, divisive content over thoughtful discourse. Recommendation engines funnel users into echo chambers that confirm existing biases. Interface designs foreground replies over listening, likes over understanding. [3]

Designing for Healthy Discourse

So how could we redesign social platforms to better cultivate open-mindedness, civil debate, and mutual understanding? Researchers have proposed many promising ideas:

  • Highlight shared humanity: Features prompting users to note common ground reduced dehumanization of the political outgroup by over 50% in experiments. [4] Profile fields for life experiences rather than just opinions could foster perspective-taking.

  • Reward listening: Mechanisms for logging time spent understanding opposing views before replying reduced moralization and outrage in early tests. [5] Platforms could similarly incentivize listening over reacting.

  • Diversify feeds: Balancing algorithmic feeds with alternate human curated or random content lowered perceived polarization by ~15% in a field experiment. [6] Better auditing for selective exposure could help.

  • Foreground nuance: Highlighting self-critical and nuanced arguments increased perceptions of reasonableness and intellectual humility in a 2010 study. [7] Subtle design tweaks could achieve similar effects at scale.

  • Peer reporting: Allowing collaborative flagging of antisocial behavior provided it is done constructively shows early promise. [8] But care is needed to prevent abuse.

The above suggestions illustrate small but measurable improvements are possible. Platforms focused on serving society‘s best interests would be wise to invest in this direction. Governments concerned for civic health could consider incentives tying legal protections to health metrics as well.

Of course many tradeoffs remain between the ideals of free speech and constructive dialogue. And no technical fix can overcome our deepest tribal instincts without buy-in across groups. But better aligning technology and governance with moral psychology research would be an important step.

The Responsibility of Influencers

Bringing millions of followers further into divisive debates for one‘s amusement seems difficult to justify ethically. Yet prominent figures continue using social platforms this way, enabled by algorithms optimized to reward outrage.

With great platforms comes great responsibility – for highly influential users more than anyone. Elon Musk likely understands this obligation given his leadership roles. One can hope his acquisition of Twitter came from dreaming its full potential for positive impact is still unrealized.

But each of us with any following – even minor – shares responsibility too. We can choose not to amplify or react to tweets designed to provoke us. We can shift conversations to constructive grounds when debates grow personal or tense. And we can advocate through thoughtful dialogue instead of righteous attacks.

The Road Ahead

Social platforms grant ordinary citizens visibility rivaling leaders of past eras. This power could be profoundly democratizing – if oriented toward understanding different views, not attacking them.

The designs underpinning online discourse incentivize bringing out either humanity‘s best or its worst. Our collective future depends greatly on this choice. Let us dream of and demand technology that helps diverse people better hear each other.

  1. Tappin, B. M., & McKay, R. T. (2019). Moral polarization and out-party hostility in the US political context. Annual Review of Psychology, 70, 319-344. https://doi.org/10.1146/annurev-psych-070618-033614

  2. Minson, J. A., Chen, F. S., & Tinsley, C. H. (2020). Why won’t you listen to me? Measuring receptiveness to opposing views. Proceedings of the National Academy of Sciences, 117(39), 24140-24149. https://doi.org/10.1073/pnas.1908369117

  3. Alfano, M., Carter, J. A., & Cheong, M. (2020). Technological seduction and self-radicalization. Journal of the American Philosophical Association, 6(3), 298-315. https://doi.org/10.1017/apa.2020.23

  4. Kvaran, T., Nichols, S., & Sanfey, A. (2022). The effect of belief similarity on perceptions of humanity. Proceedings of the National Academy of Sciences, 119(8) e2024292118. https://doi.org/10.1073/pnas.2024292118

  5. Garimella, K., De Francisci Morales, G., Gionis, A., & Mathioudakis, M. (2018). Reducing controversy exposure on social media. In Proceedings of the 2018 World Wide Web Conference (pp. 1169-1176). Lyon, France: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186139

  6. Möller, J., Trilling, D., Helberger, N., Irion, K., & De Vreese, C. (2020). Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 23(7), 959-977. https://doi.org/10.1080/1369118X.2018.1444076

  7. Kearns, E. M., Betus, A. E., & Lemieux, A. F. (2019). Why do some terrorist attacks receive more media attention than others? Justice Quarterly, 36(6), 985-1022. https://doi.org/10.1080/07418825.2018.1524507

  8. Chandrasekharan, E., Samory, M., Jhaver, S., Charvat, H., Bruckman, A., Lampe, C., Eisenstein, J., & Gilbert, E. (2018). The Internet‘s hidden rules: An empirical study of Reddit norm violations at micro, meso, and macro scales. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-25. https://doi.org/10.1145/3274301