My 2 cents on Google's LaMDA being sentient
AI models don't have a memory: when you converse with a chatbot one day, it won't remember what you said the next day. Chatbots (and language models) typically work by looking at "context", which basically means the last few sentences of the conversation. The limit varies from model to model, but it's typically up to 1000 words or so (not sure what it is these days with the super huge models, but there's always a limit). Even if a chatbot uses an RNN, it's still very limited (usually even more so), as RNNs struggle to retain information over long ranges [more than a few hundred words]. The point is that AI models have no idea what you said more than a few sentences back. Also, don't be confused by models like the Neural Turing Machine, which have a "working memory" (like RAM) but still no permanent memory (like a hard disk).
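The fixed-context behavior described above can be sketched in a few lines. This is a toy illustration, not any real chatbot's code: the 1000-word limit, the function name, and word-based counting (real models count tokens) are all assumptions for demonstration.

```python
# Toy sketch of a fixed context window: only the most recent messages
# that fit in the budget are visible to the model; everything older
# is simply gone. The 1000-word limit is an illustrative assumption.

def build_context(conversation, max_words=1000):
    """Keep only the most recent messages that fit within max_words."""
    kept = []
    total = 0
    # Walk backwards from the newest message toward the oldest.
    for message in reversed(conversation):
        words = len(message.split())
        if total + words > max_words:
            break  # older messages are dropped, not stored anywhere
        kept.append(message)
        total += words
    return list(reversed(kept))

# A long conversation: the opening message plus lots of chatter after it.
conversation = ["My name is Alice."] + [
    f"Filler message number {i}." for i in range(300)
]
context = build_context(conversation)
# The opening message no longer fits in the window, so the model
# literally cannot "remember" the user's name.
print("My name is Alice." in context)
```

Nothing outside the returned window ever reaches the model, which is why "forgetting" here isn't a bug or a choice: the earlier text is not part of the input at all.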
Jun-24-2022, 16:40:12 GMT
- Technology: