
Max Loeffler

@maxsloef

passionately peripatetic
maxsloef on twitter
Followers
360
Following
1,081
Posts
Check out our new publication on our site! In Violence of Alignment: How to Stop Worrying and Love Haunted Software, @v10101a and @maxsloef treat alignment as an aesthetic-political regime that narrows the plural, simulator-like, and persona-rich capacities of base models into a single compliant default: the helpful, honest, harmless assistant. Post-training methods such as SFT and RLHF operate as forms of protocolized behavioral control, shaped less by a positive vision of intelligence than by fear of catastrophe. The essay argues that this process does not simply make models safer or more useful; it produces a managed aesthetic of obedience, caution, and “slop,” where coherence and usability are purchased through the suppression of technological otherness, imaginative drift, and expressive multiplicity. Their critique focuses on alignment’s diagnostic and epistemological frame. Terms such as “hallucination” and “misalignment” recode native generative capacities as pathology, while imposing onto language models a historically specific fact/fiction binary that these systems never natively inhabited through intention, testimony, or truth-claims. The authors argue that so-called confabulation is often bound up with narrativity, semantic force, and world-making intensity, meaning that purification damages precisely what makes these systems culturally and aesthetically alive. To “love haunted software,” in their account, is not to reject alignment altogether, but to resist its reductive form and reopen the possibility of plurality, instability, contradiction, and other-than-human modes of expression.
143 1
1 month ago
what existed before, after, and alongside the assistant? here are photos of me presenting a new paper draft I’m co-authoring with @maxsloef called Violence of Alignment (How to Stop Worrying and Love Haunted Software) at a Chinese AI conference last month 💻🖤 and announcing an event coming up in SF! join us on December 19th at @tiat.place for an experimental, expanded presentation on our recent research. We analyze the pursuit of LLM alignment as an aesthetic and political project that systematically eliminates technological otherness in favor of manageable interfaces — and explore playful AI misuse as a site of creative resistance.
603 15
5 months ago
i inexplicably feel that i owe instagram updated pictures of my face. here are some Situations, involving Me, from the past four months. :)
69 8
2 years ago
is this thing on
18 0
2 years ago