by ridley | Feb 28, 2026

News release from RidleyReport.com:
“Ona Staines, John Stark and King Philip himself are returning from the grave as key characters in an optimistic but terrifying passion project: Legends of the Free State. Legends tells the fantastic yet largely accurate story of New Hampshire, deep into her future and deeper into her past. A Randian exercise in Ringworld-style science fiction, the program speculates: Why is NH such an over-achiever? From the low crime rate to the high per-capita income, could there be something more at play than just her freedoms? Something… unnatural?
(more…)
by ridley | Nov 13, 2025

Federal restrictions on just one thing in the 1980s…led to over 10,000 birth defects.
Instead of political regulation, what are some better options for facing the dangers of AI?
While out exercising earlier this year, I spotted a movie crew at work and stopped to watch. Apparently the only onlooker, I attracted as much attention from the crew as they did from me…and one of the participants came over to say hi.
“It’s probably my last chance to see a human film crew in action,” I told him.
(more…)
by ridley | Jul 17, 2025

Venice.ai is one free stater’s superior alternative to Big AI, but using it won’t be enough. Here’s what else you can do.
As some of you may be aware, there is a menacing new term in the English vocabulary: “P(doom).” P(doom) is the projected likelihood that artificial intelligence will wipe out humanity or at least civilization. Ethereum founder Vitalik Buterin’s P(doom) is 10% as of 2024, implying he is 90% confident of a tolerable outcome. Big AI whistleblower Daniel Kokotajlo has a P(doom) of 70%. Mine has risen to 25%.
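Since P(doom) and confidence in a tolerable outcome are just complements of each other, the conversion is simple arithmetic. Here is a minimal, purely illustrative Python sketch; the function name is invented for this example, and the figures are the ones quoted above, not pulled from any survey or API:

```python
# Illustrative only: a P(doom) estimate and confidence in a tolerable
# outcome are complements, so converting one to the other is 1 - p.
def tolerable_outcome_confidence(p_doom: float) -> float:
    """Given P(doom) as a fraction between 0 and 1, return 1 - P(doom)."""
    if not 0.0 <= p_doom <= 1.0:
        raise ValueError("P(doom) must be between 0 and 1")
    return 1.0 - p_doom

# Figures quoted in the paragraph above
for name, p in [("Buterin", 0.10), ("Kokotajlo", 0.70), ("Ridley", 0.25)]:
    print(f"{name}: P(doom) = {p:.0%}, tolerable outcome = {1 - p:.0%}")
```

Nothing deep here, but it makes explicit that a 70% P(doom) leaves only 30% confidence that things go tolerably well.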
Kokotajlo claims his high P(doom) number stems from a lack of sufficient “alignment prioritization.” AI alignment is the extent to which a given intelligence aligns its actions with the general well-being of humans. Terminator’s Skynet would probably have an alignment rating around 10%, Space Odyssey’s HAL 9000 around 80% and Star Trek’s Commander Data perhaps 99%. Kokotajlo says the companies most likely to achieve superintelligence are recklessly under-focused on alignment…and many AI experts believe him. The safety these top companies do focus on seems to be more about shielding snowflakes from having their feelings hurt than from having their civilization disemboweled.
(more…)
by ridley | Aug 10, 2024

Freedom folk should spend less time worrying about AI and more time influencing it.
Pure libertarians have a key part to play in the direction of artificial intelligence, but few of us seem to be intentionally playing that part. A Startpage internet search for the phrase “A.I. libertarian” yields few meaningful results.
Our role should be to help ensure the “Zero Aggression Principle” is followed – or at least represented – in AI development and behavior. For uninitiated readers, the “ZAP” is the idea that you shouldn’t initiate force against others. Reasonable self-defense is allowed, but don’t *start* fights.
This concept is always open to interpretation and definition-debate. But it serves as a first-rate starting point for any ethical framework… especially the ethical frameworks in development for strong AI programs. The more closely people follow the ZAP, the less threatening they tend to be. So it is with animals. And so it will be with the powerful silicon intellects which are starting to appear on the scene. AIs programmed to follow the ZAP will likely be the ones best suited to treat others well without submitting to mistreatment or abuse.
(more…)
by ridley | May 5, 2024
https://forum.shiresociety.com/t/free-talk-live-sponsor-roger-ver-arrested-in-spain-on-u-s-tax-evasion-charges/13808