Looking back, I just thought of something: AI doomers have a reason to poison the well of global free information. If they assume that AIs will scrape and learn from all publicly available data, those AIs might be less dangerous if some large fraction of that data were counterfactual.
Erik Hoel's Substack post of May 31, about AI risk, might be of interest. (I still see no other comments here?)