Three Lessons from the Social Media Era for the Age of AI
We're thrilled to have another guest author on our blog: Dr. Gonca Gürsun, a seasoned computer scientist and Boston University alumna whose work focuses on behavior learning in AI systems. In this opinion piece, she explores what the social media era can teach us about incentives, mediation, and the societal consequences of deploying AI systems at scale.
For nearly two decades, social networks rewired society in ways we didn't anticipate. At first, we called it connection. Then we watched them reshape media, politics, and marketing, turning attention into the world's most valuable currency. We thought we were building digital town squares, but ended up with polarizing engagement machines.
Now we are entering another transformation with artificial intelligence. This time faster, deeper, and potentially more consequential. AI is moving from being an assistant tool on the side to becoming the interface through which people work, learn, and decide. It is not a feed we scroll through. It is a system that answers us, collaborates with us, and increasingly acts on our behalf. And if the social media era taught us anything, it is that the biggest impacts of a technology are rarely the ones in its marketing. They are the ones produced by its incentives.
The AI era feels different: more productive, more personal, even magical. But it is precisely this "helpful" quality that makes the analogy to social networks so urgent. Social platforms did not conquer public life through coercion. They did it by being convenient. AI will do the same. The difference this time is that we are not entering unprepared. Two decades of experience have already taught us valuable lessons on how powerful systems behave when incentives go unchecked.
Lesson #1: Incentives quietly design the future
Social networks did not polarize society by accident. They optimized for a critical metric: engagement. Engagement rewarded outrage, tribal identity, and emotionally activating content. The algorithm did not "prefer" extremism in a moral sense; it simply learned what made users pause, react, and share. This should have been predictable – it is the core design of the attention economy: whatever keeps people in becomes what gets amplified.
Now consider AI. These systems are trained and tuned to maximize performance under metrics: user satisfaction, responsiveness, perceived helpfulness, time saved, subscription retention. None of these goals is inherently bad. But they create gravity. If AI is rewarded for being convincing rather than correct, soothing rather than honest, or persuasive rather than empowering, it will drift toward what the metric wants.
Taken together, the pattern is hard to miss. Whether in social networks or in AI systems, what ultimately shapes outcomes is not what platforms promise, but what they measure and reward. Optimization quietly does the rest. When incentives are misaligned, systems drift predictably, incrementally, and at scale toward futures no one explicitly chose. In that sense, incentives are not a technical detail. They are the architecture of the future being built.
Lesson #2: Invisible mediation fractures reality
Before social media, the public conversation relied on shared institutions. These institutions made mistakes, had biases, and sometimes failed catastrophically. But they created something essential: a shared reference point. Social networks broke this model. Suddenly, everyone had a publishing platform. Information became abundant, but attention became scarce. Algorithms stepped in to decide what people would see. The cost of distribution fell to nearly zero, while the value of visibility exploded. "Truth" became less important than reach.
The consequence was not simply misinformation. It was epistemic fragmentation: different groups living in different realities. People did not merely disagree on opinions; they began to disagree on facts.
AI risks taking this dynamic further because it does more than filter what we see – it shapes what we understand. Language models will increasingly summarize the news we don't read, interpret events we don't have time to study, explain science we don't understand, and rewrite arguments we cannot articulate. They do not just filter information; they produce it in the most trusted form humans have: natural language that sounds coherent, calm, and authoritative.
In the social media era, the interface is the feed. In the AI era, the interface is the answer. That difference matters. A feed offers many voices competing. An AI assistant offers one voice that synthesizes, and that synthesis can subtly frame meaning. Two people may read about the same event and ask their AI: "What happened?" They may receive two different stories – different emphasis, moral framing, causal explanations. Not because of explicit manipulation, but because personalization and optimization shape responses. We could soon live not only in information bubbles, but in explanation bubbles where reality is narrated differently depending on user profile and platform incentives.
As interpretation shifts from shared institutions to invisible, personalized systems, the common reference points that once anchored public understanding begin to dissolve. No single actor needs to manipulate outcomes; quiet optimization and individualized framing are enough. When interpretation itself becomes automated, reality does not collapse all at once – it fractures. What was once a technical layer becomes the infrastructure through which shared reality is formed or lost.
Lesson #3: Convenience scales faster than wisdom
Social networks expanded globally in a few short years. But social norms, literacy, and regulation took more than a decade to respond. And even now, the conversation remains unsettled. We did not enter this era with a shared understanding of what it meant to broadcast to the world, to live with permanent digital records, to measure popularity numerically, or to consume endless algorithmic content. We learned by getting burned.
The same pattern is unfolding with AI, except faster. Schools are already grappling with AI-written homework. Companies are integrating AI into workflows. People are using language models for mental health advice, relationship dilemmas, and personal coaching. Governments are experimenting with AI for public services. Meanwhile, cultural norms for authorship, truth, consent, and accountability remain underdeveloped.
What does it mean to submit AI-assisted work? How do we credit authorship? What is plagiarism when text generation is ubiquitous? How do we preserve competence when automation absorbs the learning process? How do we protect privacy when conversation becomes a data stream?
We are adopting AI first and leaving the hard questions for later. Not because society does not care, but because convenience scales faster than wisdom.
People will use what saves time, reduces friction, and feels helpful – even when they don't fully understand the societal impact. The gap between capability and governance is where harm grows. Two decades of social platforms proved that. The AI era is poised to repeat it unless we design for literacy, norms, and guardrails from the start.
From Attention to Agency
If all of this feels familiar, that is because it is. The AI era is not a break from the social media era so much as its continuation at a deeper layer. The same forces are at work: incentives shaping outcomes, interpretation shaping understanding, and technology scaling faster than the norms meant to contain it. What changes is not the pattern, but the point of impact.
The lesson is not that technology is harmful, but that technology is never neutral. Systems amplify whatever they are designed to reward and whatever humans are most willing to outsource. AI can become the most empowering interface humanity has ever built. But if we treat it as just another product cycle, rather than a social transformation, we will repeat the last two decades.
Only this time, the stakes will not be attention. They will be agency.
Author: Dr. Gonca Gürsun | Key visual via Better Images Of AI
What are your thoughts on the topic? Drop us a line at innovation@dw.com
