Even assuming perfect alignment between strong AI and humans, at least one existential threat remains.
If the combined entity decides on a suicidal course, that's it - game over.
Such a terminal decision might arise from certain knowledge that we would not survive contact with any more advanced species in the galaxy. A number of factors could feed into that judgment. The combination could kill us.
Here are just three.
1) Excess breeding capacity - like cancer cells in the Galactic body politic.
2) Our appalling environmental record of despoiling wilderness planets.
3) That other thing (CSAM on chain). That alone warrants collective murder-suicide.
Alignment is not the end of our problems. It's the beginning.