electronic media artist
GETTING NOWHERE
(1981, 10:00’, SD video)
A Cold War-era conversation with ELIZA about the threat of a nuclear strike and its consequences for both humans and computers, re-scanned live from an Apple II computer monitor. ELIZA was a 1960s computer program written at MIT by Joseph Weizenbaum - a parody of "the responses of a non-directional psychotherapist in an initial psychiatric interview."
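For readers curious how ELIZA produced its replies, the sketch below is a minimal illustration of that keyword-and-script mechanism, not Weizenbaum's original code: it matches a few keywords against a hypothetical script and returns canned, therapist-style prompts. The keywords and responses here are invented for illustration only.

```python
import re
import random

# A tiny, hypothetical "script": keyword patterns mapped to canned reflections.
# Weizenbaum's actual DOCTOR script was far richer; this is only an illustration.
SCRIPT = [
    (r"\bnuclear\b|\bwar\b|\bstrike\b", [
        "Why do you mention {topic}?",
        "Does thinking about {topic} trouble you?",
    ]),
    (r"\bcomputer(s)?\b", [
        "Do computers worry you?",
        "What do you think machines have to do with your problem?",
    ]),
    (r"\bI feel (?P<feeling>.+)", [
        "Tell me more about feeling {feeling}",
        "Do you often feel {feeling}",
    ]),
]

FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def respond(user_input: str) -> str:
    """Return an ELIZA-style reply: the first keyword pattern that matches wins."""
    for pattern, replies in SCRIPT:
        match = re.search(pattern, user_input, flags=re.IGNORECASE)
        if match:
            reply = random.choice(replies)
            feeling = match.groupdict().get("feeling", "")
            return reply.format(topic=match.group(0).lower(), feeling=feeling)
    # No keyword matched: fall back to a non-committal prompt.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("What would happen to computers after a nuclear strike?"))
    print(respond("I feel nothing will survive."))
```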
The following is from a Jan 24, 2023 chat with ChatGPT about this conversation from the past.
lx: «If you were ELIZA how would you have answered me then?»
chatgpt: «If I were ELIZA, a program written in the 1960s, my abilities and capabilities would be limited compared to the current state-of-the-art language models like me. Given that ELIZA was a simple program with a small set of predefined rules and responses, it would have been able to understand the concept of "nuclear strike" but it would not have had the ability to understand the full scope of consequences of a nuclear war on both humans and computers.
As ELIZA, my responses would have been limited to predefined patterns and keywords and would have been based on the script given to me by Joseph Weizenbaum. I would have been able to provide some basic information on the topic, but my understanding would have been limited and the conversation would have been quite basic.
It's important to note that ELIZA was not designed to provide accurate or complete information on complex topics like nuclear war. It was more of a demonstration of the potential of computer-human interaction.»
lx: «Would GPT-ZERO realize that the answers you just gave me were generated by a language model?»
chatgpt: «GPT-Zero is a model similar to mine, but it was trained on more recent data and with a different architecture. If GPT-Zero were to read the conversation we just had, it would be able to understand that the answers I provided were generated by a language model. It's important to remember that GPT-Zero and I are machine learning models, and while we are able to understand and generate text, we don't have consciousness or self-awareness.»