It is essential to emphasize that imperialist aggression does not punish ruling elites, but instead falls directly on the popular sectors. Blockades, sanctions, military intimidation, and financial suffocation are not “surgical” tools: they are mechanisms of economic warfare that seek to break the resistance of an entire people, discipline them, and force them to accept a subordinating order.
A recent and striking example of this occurred in Ukraine.
In the face of imperialism, neutrality is not possible. Either you are on the side of domination, plunder, and war, or you are on the side of the oppressed.
YOU ARE EITHER WITH US OR YOU ARE WITH THE STATE TERRORISTS
Iceland has experienced record heat this year. Glaciers have been collapsing and fish from warmer, southern climes, such as mackerel, have been found in the country’s waters.
This is followed by more gifts: baskets and bread trays made of strips of woven ivory, as well as flower garlands and an additional pair of gold and silver perfume flasks.
After these gifts, there are more performances, including from naked female acrobats:
who did tumbling tricks among swords and blew fire from their mouths.
Techno-libertarians are seizing power in the US, intent on collapsing government and liberating society from work, taxes and elections – and human “inefficiencies” – but there may still be hope for Australia
A cohort of software engineers in Silicon Valley dreamt of the possibility of digital cash. This electronic currency would exist only on the internet and would be encrypted and anonymously exchanged without a centralised authority (a bank), untraceable to any individual, and, to them most importantly, outside the reach of regulation. It would defy inflation through enforced scarcity: a perfect currency. This project was important to them for two reasons. One: no government would be able to tax this online money, as no one would know to whom it belonged – it was stateless, borderless, globally available to anyone. Two: it would be the ideal way to pay bounties in the service of their true dream – an assassination market.
An assassination market would work like this: a bounty would be publicly placed on the head of any government official who the market decided should be murdered. No one would ever know who the members of this market were, as encryption would keep them anonymous. Someone would name the target they desired to see killed. Someone else would reply with where and exactly when they would be assassinated, both parties identifiable only by a cypher specific to them. Others who also wished to see the person killed would anonymously pledge their untraceable money to the death pool. This might total millions of dollars, for the assassination of, say, a president or a chief executive. When the murder later happened exactly as the anonymous volunteer had predicted, they would then be able to cryptographically prove themselves the contract organiser and collect their bounty of digital cash without anyone ever knowing who they were.
If this sounds like the plot of a Neal Stephenson novel, it isn’t. It was 1997, and the person who claimed this idea as theirs was Jim Bell. He and his thesis, “Assassination Politics”, were the subject of glowing profiles at the time in the likes of Wired magazine. Before becoming a crypto-anarchist, Bell graduated from MIT and worked as an electrical engineer at Intel in the early 1980s. He was also jailed for tax fraud in the late ’90s and served 11 months in US federal prison. Released and shortly after re-arrested and jailed again – for stalking federal tax agents – Bell was given a 10-year sentence, broke parole on early release and was imprisoned a third time, until 2012.
Bell was a prolific contributor to the “Cypherpunks” mailing list, a legendary email digest in Silicon Valley circles in the early ’90s and eventually a vastly influential one. He posted there about his ideas, which became “Assassination Politics”, as well as his deep hatred of government and evangelism for privacy via encryption. He shared with most members of the list a fascination for bringing about a future where artificially intelligent machines or software agents would be able to do the work of human beings. He also had an interest in cryonics, a passion of many cypherpunks hoping to be unfrozen and resurrected in that glorious post-human future.
Some other historically notable contributors to the list include Julian Assange – whose creation of WikiLeaks was inspired by its anarchist conversations – and the pseudonymous creator of the cryptocurrency bitcoin, Satoshi Nakamoto. These were people – and they were almost exclusively men – who banded together around a shared mentality, antisocial hackers who wanted absolute privacy for themselves (just let us buy our drugs, sex and murder hits in private, thanks) but none for anyone whom they deemed to be their enemies (normies, the government). They were techno-libertarians. Or anarcho-capitalists. Or, more accurately, accelerationists, working to push our current social orders to collapse. (More on which later, unfortunately.)
Reasonable people might (and did) object to the idea of an assassination market, even if it only ever existed as a concept. Positions such as “murder is wrong” were given short shrift by the market’s defenders. If any government actor were to behave corruptly in the eyes of the market, they would always know that they could be killed at any time: murder as deterrent to malfeasance. In this way, its architects thought of the assassination market as perfectly moral. What constituted “corruption” to the cypherpunks was often undefined or vague, or it was conveniently in line with the desires of wealthy people who wished to not pay tax (taxes being the ultimate state violence committed against libertarians).
But the idea went further: it would be a deterrent not only to political corruption, but in effect to anyone who wanted to run for political office at all, however pure their motives to serve might be (the argument being that all who seek power can be corrupted by it). It would therefore become a risk too great to take on any kind of notable position of power or public renown. How our society would function without leadership was usually left as a very large series of blanks to be filled. If it was addressed at all, it would sometimes fall back on the libertarian fantasy of a “self-organising society” that would emerge unbidden. (The long and bloody history of cults suggests this often goes poorly.)
The cypherpunks’ ideas also share their DNA with the WikiLeaks doctrine of “radical transparency”, the position that all things done in the service of government should be done in full view of the public. There would therefore be no need for (murderous, corrupt) statecraft in a world where secrecy is impossible. War would be obsolete when every negotiation, every national need, was conducted in the open for the public. That was the idea, at least, in the most generous reading. Libertarians’ disdain for war is perhaps the only thing that could ever be said in defence of what, if you squint hard enough, might otherwise be described as their antisocial beliefs, though a cohesive belief system goes against their anarchist aspirations. (It’s profitable anarchism, mind you. Over the course of its operations, WikiLeaks raised funds in the form of bitcoin donations: in 2010 the organisation held 4000 BTC, which would be worth hundreds of millions of dollars today.)
“Radical transparency” would eventually land Assange in self-imposed exile in the Ecuadorian embassy in London for seven years, and in Belmarsh Prison for five, while being sought for extradition to the United States on 17 charges of espionage. WikiLeaks’ architects became players in the game they set out to destroy: outmanoeuvred by statecraft, in the crosshairs of the governments they sought to topple by exposing their inner machinations through publishing unredacted classified information.
This came with many unforeseen consequences, quite aside from Assange’s years spent living as a stateless leader of a hacker collective trying to outrun various law enforcement agencies across several countries. (Assange eventually pleaded guilty to a US charge of conspiracy to obtain and disclose national defence information. He struck a deal on jail time already served in order to return to Australia, where he has been living mostly quietly since June 2024.) The number of people endangered as a result of their identities being revealed by WikiLeaks in lethal regimes such as Afghanistan’s is still unclear. But it was in line with the blunt rationale of hacker philosophy: if you’re doing something in any way “wrong”, and you are exposed for it, then only you are to blame for whatever consequences that might invite. There was no room for the complexity of the geopolitical world in this way of thinking.
This sentiment was infamously echoed by Google’s then chief executive Eric Schmidt, when in 2009 he was asked to comment on people’s increasing anxieties about the lack of privacy on the rapidly evolving social internet: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Sounds good! If there’s one thing that is great for social cohesion, it is the need for self-censoring out of fear of being exposed for doing something “wrong”. And if our every text message, email, voice recording, internet search and reading list were ever revealed en masse to the world in some catastrophic data breach? Surely that would also be fine. A corporate nightmare panopticon of surveillance has since come to be normalised in many spheres of employment, where managers embed tracker software in their workers’ laptops to count their number of keystrokes per minute, or watch them through their webcam. Tracking efficiency to the keystroke and then firing people based on perceived underperformance is something not even infamous union-buster Henry Ford could ever have dreamt of, and it has only come to exist in a world where our notions of digital privacy have been so comprehensively eroded.
It is worth revisiting this history of Silicon Valley political projects in the wake of Elon Musk’s attempts to attack the US government apparatus via his Department of Government Efficiency (DOGE, named for a no-value cryptocurrency) in the second Trump administration. Nearly 300,000 public servants were fired, leaving shortages of qualified staff in the departments and agencies of education, homeland security, energy, defence, environmental protection and other critical services. It is not clear what measure of efficiency these mass firings achieved in monetary terms, or how government funds being “saved” will instead be spent elsewhere. (Musk’s Starlink satellite internet company, though, did receive a no-doubt lucrative new government contract in February to upgrade the Federal Aviation Administration’s IT infrastructure.)
In the misanthropic, maladroit logic of techno-libertarianism, this kind of brute consequence is idealised. Efficiency is the only measure by which to judge society – all parts of it – as “successful” or not. The imperfect human mess of participatory democracy has no place in the eyes of people who perceive the world and everyone in it only as lines of code – markets, algorithms, machine logic. How much simpler it is, how much more efficient, this position argues, for all things to balance as equations – either correct or incorrect, with no anxiety-provoking uncertainty in between.
The DOGE sackings were perceived by many observers as a head-scratching, hard shift to the right, where a tech billionaire oligarch was given 90 days, unelected, to gut government departments. However, there is a very clear line to be drawn back to its Silicon Valley roots, which were always radical and reactionary. Musk might have once done a good job at dressing his various enterprises in the guise of green-energy evangelism when it resulted in him securing tens of billions of dollars in government subsidies, but his true politics have always been visible to anyone aware that one of his original business partners in PayPal (from which Musk was fired) was the ultra-right fellow tech billionaire, Peter Thiel. PayPal’s initial aim was to create a new internet currency to replace traditional forms of money. Ultimately, the investment corporation BlackRock, infamous for its investment in fossil fuels, became one of the company’s largest shareholders. Thiel’s surveillance technology company, Palantir, holds AI-enabled contracts with the US Department of Defense, the Department of Homeland Security and the US Army, as well as licensing its software to the FBI, CIA and the National Security Agency. Its contract with the US Army – to apply AI and data solutions to battlefield logistics – is worth US$10 billion alone.
There is a long tradition of extreme separatist politics in the United States. That can take the form of “sovereign citizens”, who refuse to recognise the rule of law, or white supremacist terrorists seeking to violently establish ethnostates. (The incoherent beliefs of “sovereign citizens” have increasingly seeped into Australia’s fringes through online spaces, and are connected to the recent murders of police officers in regional Victoria.) The internet itself has always been seen by the cypherpunks as the ultimate path of secession from society, which for them is the “meatspace” where all human endeavours ultimately come to failure or at best function “suboptimally”. Machine thinking – and the desire to become machines themselves in the form of superintelligent, post-human cyborgs – is fervently discussed among cypherpunks. As all problems in the world are the result of human failings, they argue, why not simply replace us all with machines that would never be “irrational”? Lacking passions, machines would only act on logic. It is a deeply antisocial, nihilistic and, ultimately, anti-human ethos. It holds that, for example, writing code to better identify people to be killed in battle is absolutely fine, as long as it pays back in the billions.
The accelerationists’ disdain for working out the particulars of how machines might replace our political and social systems plagues their plans for disruption. Take bitcoin, for example, which has not only failed to be anonymous, but has left its users so readily identifiable that it led to the three largest seizures of criminal assets in US history. Ross Ulbricht, also known as Dread Pirate Roberts, was one of those targeted, for his running of the semi-encrypted online black market, Silk Road. He was pardoned by Donald Trump in January after serving more than a decade for conspiracy to commit drug trafficking, money laundering and computer hacking.
There may be much agreement among non-reactionary citizens that current systems – democracy as hollowed out by the forces of capitalism – have alienated us so completely from a sense of political agency that disengagement is the only way to cope. When roughly half of young people in Australia are opting out of starting a family as they cannot afford to, and those pressures are twinned with a worsening climate crisis, they might well look desperately to anything that promises to reverse those trends. If that means gutting government completely, maybe that is the answer from a vantage of desperation, when the simple right to a stably employed existence has become a luxury for so many people.
Musk and Assange are both accelerationists in their politics. Accelerationism argues that the system is broken, and the only way to change it is to turbo-drive it to its own inevitable destruction. The answer to the end of capitalism is more capitalism until we reach collapse (with those at the top gathering as much profit as they can along the way to prepare themselves for the new world). This is where those who desire the end of capitalism diverge from the wants of techno-libertarians. Left accelerationism would say there is a better world than this, one rooted in sustainability and divestment from capitalism’s destructive perpetual growth. Right accelerationism would say, as the cypherpunks are fond of: Fuck you, I got mine. When the regimes fall, work it out for yourself. Buy yourself multiple citizenships in as many countries as possible while you still can. Purchase and fortify an island. Live underground in a repurposed luxury nuclear bomb shelter patrolled by your private army. There will be no taxes! We’ll all have bitcoin! (In these post-society visions, there is always infrastructure for running cryptocurrency, but not for, say, food production.)
The most reactionary of right accelerationist techno-libertarians do not want to live in a democracy; they want to live in their own micronations, run like corporations with a chief executive as dictator, somewhere between a sovereign monarchy and McKinsey as run by HAL from 2001. Peter Thiel is so in favour of these ideas that he consults a “personal philosopher” about how to employ them – a man called Curtis Yarvin, who formerly wrote long cypherpunk-inspired blog posts under the pen name “Mencius Moldbug”. Within these, Yarvin proposed replacing democracy with a form of corporate monarchism (RAGE – “Retire All Government Employees” – was his first order, which Musk later reconfigured in practice as DOGE), where there would be no voting, but instead people simply had the right to “exit” if they got tired of living in their fortified township being besieged by raiders from the outside world. These ideas – to gut government and put business executives in charge of every part of society – have gone well beyond their birthing on the nascent internet: they are affecting the government policy of a world superpower. Politico reported that Yarvin attended a Trump inaugural gala in January as an “informal guest of honor”.
The fetishisation of efficiency in techno-libertarianism is difficult to disentangle from American capitalism: all things – but especially technology – must move ever forward, as “progress” is inherently good and virtuous. Within Silicon Valley, technology is often said to simply “want what it wants”, ascribing a quasi-mystical power to what are no more than the byproducts of decisions made by human beings and the products created by them, designed to extract maximum profits. Technology in this way is thought of as manifestly unstoppable, always correct and the means of human liberation, particularly from the drudgeries of work. Labour will cease for people and will instead be undertaken by intelligent machines (this brings to mind a recent viral piece of internet commentary: “can we get some a.i. to pick plastic out of the ocean or do all the robots need to be screenwriters?”). The vision is a hodgepodge of self-appointed chief-executive god-kings punitively controlling a populace too stupid to govern itself, while – don’t worry – everyone lives in infinite abundance made possible by “artificial general intelligence” machines with intellects vastly superior to our own.
This technocratic undergirding of Silicon Valley capitalism also neatly puts anyone who might criticise it firmly in the Luddite camp: backwards, against progress, unenlightened. It might be pertinent to remember here that the original Luddites of the 19th century were textile workers in England who objected to the automation of their work by machines and so to the elimination of their jobs. They were also concerned that the lack of human oversight would result in inferior goods, or “slop”, as many are calling the bloodless, often inaccurate AI-generated writings and imagery of ChatGPT and its clones.
In a recent interview, British documentary filmmaker Adam Curtis (All Watched Over by Machines of Loving Grace) likened today’s AI to a ghost, trapping culture in our collective past. That AI hoovers up and spits back out our collective cultural memory, made up of all ideas, thoughts and feelings committed to print, film, television, music and art that it can find, means that nothing it “creates” is ever new, Curtis said. The much-vaunted “future” is just a regurgitation of what has gone before.
AI is trained on our emails, text messages, photographs and drawings. It is an infinite remix, a copy of a copy of a copy of a copy, which degrades in accuracy and meaning with each iteration. For Curtis, a horror story about AI would not be about it assuming some godlike superintelligence and destroying humanity (a persistent fear of a subset of cypherpunks who believe it has to be monitored closely – by them – to ensure it is “friendly” to humans), but rather about what it has already destroyed: that we are being haunted by contextless relics of ourselves. All our innermost thoughts and desires have become fodder for the machine, for its owners to turn into profit.
It is right to be horrified by this external stagnation dressed up as innovation, all the more so because it is in the hands of people with such contempt for the creative arts and the humanities that they believe not only that algorithms can create artistic and intellectual masterpieces (a million monkeys typing), but that the “training data” used to achieve this should be taken without compensation, in what amounts to a wholesale rejection of copyright law. For these tech engineers and chief executives, if something cannot be monetised, it is worthless, and hence it can be taken for free.
Sam Altman, chief executive of OpenAI (not an “open source” developer any longer, but rather a for-profit corporation valued at $500 billion), recently announced the company’s intention to produce a Hollywood film made largely by AI bots with minimal human oversight. The AI behind the film Critterz will have been trained on every animated film so far laboriously created by human artists, with the aim of making films “faster and more cheaply than Hollywood”. That this “efficiency” will only be possible because of the intricate creative labour of the tens of thousands of people who have brought the medium’s earlier visions to life appears entirely lost on shareholders.
This infinite regression, a hall of mirrors of the past, fits neatly within Silicon Valley’s retrograde politics: fascism is not new. Monarchy is not new. Busting unions is not new. Business owning the means of production is not new. Prioritising profits over people is not new. Rather, Silicon Valley’s biggest corporations have done an exceptionally good job with their public relations, convincing people that their ability to “connect the world” by posting on Facebook empowered them in ways never before seen in democracy. That every time you got into some pointless argument with a stranger on the internet, you were actually contributing to a heretofore unrealised future of human potential. To return to Adam Curtis, a soothsayer of sociological phenomena, this illusion of personal agency is false. In The Century of the Self (2002), a comprehensive history of the birth of public relations, Curtis explains the psychology employed by Edward Bernays, nephew of Sigmund Freud and widely regarded as the godfather of advertising: it was about crafting messages that turned our consumer culture from a “needs-based” one to one based on desires we previously didn’t have. And while consumerism was sold to us as the ultimate personal agency and path to freedom, instead we became trapped in our unending striving for material satisfaction. This world we now live in, where everyone is the “main character” of their lives, preoccupied with their own status, has in fact shrunk the power of the individual, not expanded it. And it has distracted us from collective action against bad governance, whether civic or corporate.
We don’t need to check Instagram or Facebook, or refresh Twitter every two minutes. None of these things are needs. In fact, there is copious peer-reviewed evidence that it would be better for our ruined attention spans if we didn’t; better for our mental health, our relationships, our democracy. We don’t need to upgrade our smartphones every 18 months, consigning the previous ones to scrap heaps that end up polluting developing nations with towering mountains of hazardous e-waste garbage, to be picked over for recoverable parts for pennies. But Apple’s latest advertising may make you feel like your life really will be better with that marginally more powerful new camera, while we run down the Earth’s supply of indium (a byproduct of zinc mining) to make touchscreens for the phones upon which almost every one of us relies.
Many Australians working in the United States for the first time – often for tech corporations – are horrified to learn of the country’s labour law of “at-will” employment, in which employers can terminate the employment of any worker at any time, for any reason. Without compensation. You might turn up to work on a Monday to find the doors of your worksite locked, the business having become insolvent or sold to someone else who no longer needs your services. You might find your job replaced by an AI agent. And the fact that you no longer have a job – or any of the benefits it bestowed, such as health insurance or a visa conditional on being employed by said company – is not your now ex-employer’s problem, as at-will employment law protects corporations, not people.
At the last Australian federal election, the Liberal Party’s flirtation with US-style “efficiency” mandates, such as forcing remote employees to return to the office to better monitor their productivity and promising to gut the public sector, was met with the most comprehensive demolition of the party since 1943. Equally disastrous were Peter Dutton’s attempts to sell the country on adopting nuclear energy, which is currently prohibited at both state and federal level. The ban is a long-held, firm rebuke of nuclear power’s risks of catastrophic failure (Fukushima, Chernobyl, Three Mile Island) and of the environmental and security risks posed by the inability to store nuclear waste safely (and a predilection for attempting to do so on unceded Aboriginal lands).
Nuclear energy is a pet lobbying project of Silicon Valley, where it is becoming increasingly clear that the enormous energy consumption demanded by ever-expanding data centres is straining the country’s electricity grid beyond its capacity. Data centres are growing to service the multiplying computing demands of AI agents, as well as the data demands of every new device we connect to “the cloud” – every smartphone, smart watch, smart light, fridge, car navigation system, streaming service, voice-assist agent. All of these are pulling down hard on the world’s electricity supply. The International Energy Agency projects that, by 2030, almost 3 per cent of all global electricity consumption will be used by data centres.
To not be able to power this infrastructure is an existential threat to Silicon Valley’s ability to function – as well as to our own ability to continue to live lives of constant convenience abetted by technology we now believe we cannot do without. In March, a lobbying group including Amazon, Google, Meta and 14 major global banks pledged support for the goal of tripling the world’s nuclear energy capacity by 2050, putting pressure on governments to adopt the technology, despite renewable energy, such as solar power, now being the cheapest source of electricity generation. Imagine having to say to the children wandering the wastelands of a post-nuclear apocalypse future, “Yes darling, we did this so that people in the past could ask a computer to make an image of a buffalo riding a tiny bicycle.”
Yet there is hope of avoiding this enshittified future. When push came to shove, Australians roundly rejected the Liberal Party’s attempts to import these ruinous ideas from America; Peter Dutton’s cockeyed swing at bringing Trump-lite policies to Australia blew up as spectacularly as Donald Trump’s short-lived bromance with Elon Musk. (After Musk disagreed with Trump’s “big beautiful tax bill”, the US president threatened to look into deporting the South African–born billionaire, as DOGE wound up most of its operations in July. Musk’s promised government savings of US$2 trillion were estimated to be closer to US$200.6 billion, and even that figure is widely disputed as inflated.) In contrast, Prime Minister Anthony Albanese brandished his Medicare card during his election night victory speech, reassuring voters that no one in Australia will be bankrupted by falling sick or being injured, as is normal in the United States.
It was also heartening that young Australian men bucked the global trend of a sharp swing to the right by their cohort elsewhere, voting for more progressive parties and singling out the environment and the right to affordable rent as key reasons for their decision to vote either Greens or Labor. Traditionally, it has been young men who have been the targets of the misanthropic, misogynist sphere of the internet so beloved of tech bros, crypto grifters and all the other bastard children of the cypherpunks. Maybe their ethos of every man for himself has begun to wear thin for the men who are now growing up two generations removed from the authors of a mailing list. Silicon Valley bros might not lead to Silicon Valley blokes.
Perhaps it will be software engineers who will be the first people to lose their livelihoods to AI, once they deploy it to create the first software able to write perfect code – no human oversight necessary. The most recent data suggest that more than 26 million people across the world are employed as software engineers. The logic of techno-determinism would dictate that when faced with professional extinction, such workers should “adapt or die”. Their no longer fulfilling a role in their profession is proof of their lack of utility; therefore, the mass unemployment of 26 million people without new jobs to go to is justified. The system would be working exactly as it was designed to: achieving maximum fiscal efficiency.
A rough calculation would put wage savings for making software engineers redundant globally at over US$3 trillion per year. Profits would, of course, go straight to the top of the companies’ executive suites, not to the workers who came up with the code. The corporations no longer employing them will have reaped rewards, capitalising on the increased sharemarket value in no longer having to pay such wages, while retaining a software product they can sell to do the work of no-longer-needed people. That people without jobs have no money to engage in capitalism with is a problem for the genius chief executives of the future to figure out, along with what to do with their products that no one can afford to buy.
And those people who coded themselves out of a job? That would be their own bootstraps they’d have to pull themselves up by.
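That “rough calculation” is easy to reproduce. The following minimal Python sketch shows the arithmetic behind the headline figure; the average annual cost per engineer used here is a hypothetical illustrative number, not one given in the text.

# Back-of-the-envelope check of the "over US$3 trillion per year" wage-savings claim.
# ASSUMPTION: the average fully loaded annual cost per engineer below is illustrative.
ENGINEERS_WORLDWIDE = 26_000_000      # "more than 26 million", per the figure cited above
AVG_ANNUAL_COST_USD = 120_000         # hypothetical global average (salary plus overheads)

annual_wage_savings = ENGINEERS_WORLDWIDE * AVG_ANNUAL_COST_USD
print(f"Implied annual wage savings: ${annual_wage_savings / 1e12:.2f} trillion")
# Prints: Implied annual wage savings: $3.12 trillion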
Shruthi introduced me to the concept of neti, basically, “not that.” It could be a slogan for India’s third way—unlike America, unlike China, not this, not that. I’m rooting for them. Unlike Europe’s third way, which seems to be to regulate, India’s third way wants to build in public. Halfway through the trip, we had a wonderful conversation with Pramod Varma, the architect of India’s Unified Payments Interface (UPI). It would be no exaggeration to say that UPI, which processes nearly half of the world’s real-time digital payments, has done more for financial inclusion than any technology in history. Most of us, from the West, pressed him late into the night on why the private sector couldn’t do what UPI did. Maybe there are just idiosyncratic things about India, and maybe the private sector could’ve done it. But the fact is UPI is a huge success, which, because I didn’t learn how to replicate it, I can only credit to the architect’s boundless optimism.
It’s a trend that more or less says that newer AI can somewhat reliably do harder and more useful tasks, as measured by how long it would take humans to do the tasks. As of this writing, the best AI can, roughly half the time, do tasks that take humans over four hours; that number was nine minutes just two years ago. Extrapolating the trend, AI will soon do tasks that take humans weeks. People use the trend to justify why the world might change very soon, very suddenly.
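For readers who want to see how that extrapolation works, here is a minimal Python sketch under stated assumptions: the task length the best AI completes about half the time grew from roughly nine minutes to roughly four hours over about 24 months, the growth is exponential, and “weeks” is approximated as a single 40-hour work week. The exact figures and the horizon are illustrative, not from the text.

import math

# ASSUMPTIONS (illustrative): ~9 min -> ~240 min over ~24 months, exponential growth,
# and "weeks" approximated as one 40-hour work week of human effort.
start_minutes = 9
current_minutes = 4 * 60
elapsed_months = 24

doublings_so_far = math.log2(current_minutes / start_minutes)
doubling_time_months = elapsed_months / doublings_so_far
print(f"Implied doubling time: ~{doubling_time_months:.1f} months")   # ~5.1 months

target_minutes = 40 * 60  # one working week of human effort
months_to_target = doubling_time_months * math.log2(target_minutes / current_minutes)
print(f"Months until ~1-week tasks, if the trend holds: ~{months_to_target:.0f}")  # ~17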
treated the prompt more like ‘what surprised me in 2025 that probably shouldn't have’ and wrote about F1 going to Apple, and the rapid rise of prediction markets.”
As for one of the recurring themes, Robert Cruickshank says, “I find it very telling how many people quoted here are surprised at institutional enabling of Trump’s Nazi authoritarianism. They lulled people into a false sense of security last year claiming the guardrails would hold. And these people generally have done little to maintain the guardrails!”
Shevek switched off the latest full-spectrum quantum computer with a smile. "Some problems are too non-trivial and NP-hard for giant data centers and supercomputers. I've put them all in the little quince."
Counterfactual quantum communication (CQC) is an intriguing paradigm originating from quantum mechanics, enabling spatially separated parties to achieve communication tasks without the need to transmit any physical particles across the channel. Conventional quantum communication typically relies on particle transmission or utilizes entanglement-assisted protocols with local operations and classical communication, such as quantum teleportation and superdense coding, to transfer information. As the field of quantum communication develops rapidly, significant progress has been made in CQC. In this paper, we present a comprehensive tutorial on CQC for transmitting both classical and quantum information, noting that no physical particles are found in the channel during successful information transmission. We begin by studying the origin of CQC, followed by a detailed examination of counterfactual protocols for classical and quantum information transmission. This paper highlights the applications of CQC and outlines future research directions.