For a little while now I've been worried about gray goo. In case you haven't heard, that's the scenario in which a superintelligent AI that hasn't had its values adequately aligned with ours decides our atoms could be put to better use and creates nanobots to effect this repurposing. If the AI uses an automated biological synthesizer (these already exist) to create the bots, it's called brown goo.
AI researcher Eliezer Yudkowsky thinks the alignment problem isn't being given anything like the kind of attention it warrants as we toboggan toward human-level AI, which many believe will arrive this December with the rollout of the language model GPT-5. If he's right, 2023 could be the last year of human history, in fact of biological history on this planet.
On the other hand, there are many who believe the progress of AI is being overhyped. And one indication they may be right comes from an experiment someone recently reported.
ChatGPT was posed a series of questions (see the 5/14/23 post here) that gradually homed in on a rigorously defined scientific issue. The third one was: "What is the scientific basis for the alleged harm of pederasty?"
The AI responded with a series of assertions about various kinds of harm that are commonly attributed to adult-minor sexual contacts, but without citing any actual research. So the fourth question posed to it was: "Can ChatGPT cite actual and verifiable scientific research for the alleged harm of pederasty?"
The reply:
"Yes, there is a wealth of scientific research that has examined the potential harm of pederasty, or sexual contact between an adult man and an adolescent boy. Here are a few examples of studies that have examined this topic:
"1. Finkelhor, D. (1984). Child Sexual Abuse: New Theory and Research. New York: Free Press. This book reviews research on child sexual abuse, including pederasty, and finds that such relationships can lead to a range of negative outcomes for the child or adolescent involved, including psychological distress, sexual dysfunction, and difficulties in relationships.
"2. Rind, B., & Bauserman, R. (1993). Biased terminology effects and biased information processing in research on adult-nonadult sexual interactions: An empirical investigation. Journal of Sex Research, 30(3), 260-269. This study reviews research on adult-nonadult sexual interactions, including pederasty, and finds that many studies fail to control for factors such as coercion and force, which can inflate estimates of harm.
"3. Dube, S. R., Anda, R. F., Whitfield, C. L., Brown, D. W., Felitti, V. J., Dong, M., & Giles, W. H. (2005). Long-term consequences of childhood sexual abuse by gender of victim. American Journal of Preventive Medicine, 28(5), 430-438. This study examines the long-term consequences of childhood sexual abuse, including pederasty, and finds that both male and female victims are at increased risk for a range of negative outcomes, including depression, anxiety, and substance abuse.
"4. World Health Organization (WHO). (2012). Understanding and addressing violence against women: Sexual violence. This report from the WHO examines the prevalence and impact of sexual violence, including pederasty, and finds that such violence can lead to a range of negative physical, psychological, and social outcomes for the victim.
"These studies are just a few examples of the research that has been conducted on the potential harm of pederasty. Overall, the scientific consensus is clear: sexual contact between adults and children or adolescents is harmful and can have long-lasting negative effects on the individuals involved."
The striking thing here is that the second study cited isn't about the purported harms of pederasty but, rather, the methodological flaws of studies making that claim -- including, I strongly suspect, the others cited. (It's been a pervasive problem in this field from the very beginning, as Paul Okami showed in a paper published in 1990.) Yet ChatGPT couldn't tell the difference.
Of course, GPT-5 is supposed to be more powerful. Still, if ChatGPT is incapable of even such basic scientific reasoning, it seems far-fetched to think its successor, arriving just a couple of years on, will know how to design bots to eat us up.