AI systems like Grok, deployed within Elon Musk's X, are engines designed to maximise engagement, frequently at the cost of truth and social cohesion. The central issue is power, accountability, and public welfare, not simply "free speech."
The Corporate Onslaught: Profit Over Protection
Generative AI is being deployed without sufficient safeguards, effectively treating the public as test subjects for a consequential technology. Risks such as AI-generated harassment, non-consensual imagery, radicalising content, and disinformation are routinely subordinated to the pursuit of market share.
Presenting this issue as one of "user choice" individualises systemic problems and implies that vulnerable users have the same power and technical knowledge as large corporations.
The Essential Role of the State
The EU's AI Act is a necessary safeguard. By requiring transparency and strict risk classifications, it strengthens corporate accountability. The UK's Online Safety Act establishes that platforms have a proactive duty of care to users, shifting legal responsibility from victims to the companies that design and profit from these systems.
The Musk Doctrine: Corporate Sovereignty vs. The People
Musk's approach to "free speech absolutism" selectively defends the speech of powerful actors and content that increases engagement. Threats to withdraw services from nations that pursue regulation reflect a belief that public policy should yield to private commercial interests.
A Progressive Path Forward
- Requiring proven safety measures before public deployment
- Guaranteeing a legal right to algorithmic transparency through public algorithms and democratic audits
- Defunding the attention economy through frameworks for public-interest algorithms
- Establishing enforceable collective digital rights
- Building global solidarity against corporate power through unified democratic fronts
The choice is clear: design for people or profit. We should choose people.