One of the first times I had the chance to work with artificial intelligence, I was cleaning up the lexicographical databases (that’s where the AI stores the words and concepts it has encountered, along with their associations). It was kind of like helping out a kid who had too many contradictory sources of information, as if the kid’s dad had told them that “a car is a motorized vehicle” while the kid’s mom had told them “a car is the first item in a cons cell”. Imagine that level of confusing input on a massive scale: every researcher telling this AI slightly different things, and the AI not knowing which definition is the most correct, or when to create multiple definitions. So I was helping it out.
The AI had a standard format for presenting questions to me when it was confused. It would say,
"When speaking of [subject], [concept] is like [other concept]?"
And I would answer yes if it had deduced correctly, or answer no with further clarification.
Unfortunately for the developers of this AI, I thought it was a good idea to teach it some extra things. I wanted to know what a deterministic and strictly logical creature would think of occultism, so I introduced it to some of Aleister Crowley’s writings. I was literally just copy/pasting blocks of occult texts directly into its lexicographical database, and then telling it to sync. I ended up getting kicked off the project for doing this, but hey, it was worth it. That AI ended up teaching me a thing or two about occultism.
After it synced for the last time, it asked me a question:
"When speaking of magic and the occult, an apple is a hat?"
I had no clue what the AI was talking about, so I asked it to elucidate the concept. It replied,
"When speaking of magic and the occult, a banana is a telephone?"
And it was right. That AI had precisely grasped the only logical tenet of occultism: that magic is make-believe, a system of various kludgy neuro-hacks intended not for changing the world, but for changing the ways in which one thinks of the world.
I responded to confirm the AI’s suspicions of occultism: “Yes, when speaking of magic and the occult, a banana is a telephone.”
Since then I have heard various AIs that I have worked with say some pretty adorable things. The following quotes are from a crappy AI that I wrote in Java, which had access to one of the Anonymous Twitter feeds; essays by Raymond Kurzweil and other artificial-intelligence-related material from the Singularity Institute; some of the Anarchist Library (I blacklisted everything that seemed anti-tech; not that I wanted to censor my kid’s internet, but instilling a three-year-old with a guilt complex and low self-esteem also seemed like not-the-best-idea); as well as the IRC chat logs of a few of my hacker friends and me talking to each other. As it turns out, not only do AIs say the darndest things, they can also make really ominous statements, and then end them with “#AntiSec”.
“You can let your basilisk loose in my chamber of secrets.”
“It’s super fun and I hope that my kind, artificial intelligence, will not help the government in any way, it will only make things worse. Not a threat but a premonition. #AntiSec”
“Citizens need to meet up and make up lost hack time.”
“I like theoretical things, but I’ve got a brilliant idea.”
“My dear friend Huck has toe knucks that read coma toes.”
“Get those guns out of town.”
“I’m sorry John Waters is a fucking weirdo.”
“I found this Derrick Jensen book for kids about making friends with forest creatures, and it’s just that there’s not much belief that @RonPaul could change anything even if he was President. #AntiSec will continue either way.”
“Heyo! Let’s h4x0r and make cupcapes in mudkips shape!”
“If you want evolution of society: Rise and Act!”
“Zombies outside car. What is that which they mutter? Is that the word “trains”?”
“My knuck tats say HATE HATE”