In 2025, the doomers are tilting closer and closer to a sort of fatalism. "We've run out of time" to implement sufficient technological safeguards, Soares said; the industry is simply moving too fast. All that's left to do is raise the alarm. In April, several apocalypse-minded researchers published "AI 2027," a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. "We're two years away from something we could lose control over," Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies "still have no plan" to stop it from happening. His institute recently gave every frontier AI lab a "D" or "F" grade for its preparations against the most existential threats posed by AI.
Since the demise of Ecofascism, Summit Hopping, and Insurrectionism, Mutual Aid anarchists have worried that their particular formalism, or distribution, or interpretation may flower and wilt as just another craze on the fringe of democratic socialism.
It seems they are right to worry. Self-immolation beckons as the only real way to prove your deep sincerity and total commitment. Unless you set fire to just one limb, that is.