[Perspective] Thinking abstractly like a duck(ling)

Science

Indeed, abstract conceptual thought is held to be so central to being human that the idea of someone being incapable of this kind of thinking is a subject for (sometimes rather cruel) humor. Serious consideration of the capacity for abstract thought dates back at least three centuries, to the famous English philosopher John Locke. Locke confidently contended that "brutes abstract not" (1) and insisted that exhibiting abstract thought definitively divided humans from all other animals. However, no science then existed to confirm or refute Locke's contention. On page 286 of this issue, Martinho and Kacelnik (2) put the claim that animals are incapable of abstract thought to a strong behavioral test.


Ask The Thought Leaders: What's The Future Of Artificial Intelligence And Law?

Future of Everything

Despite what we see on TV, the legal industry has always been rather slow to adopt new technology. Even now, most of a lawyer's research is done the old-fashioned way and requires a small army of assistants and paralegals. However, recent advancements in artificial intelligence are set to change that permanently.


Has anyone ever built an AI to help their own thought process? • r/artificial

#artificialintelligence

This thought struck me as I was puzzling out something for work. I found that talking to myself helped me lay out the problem and the pros and cons of different scenarios, and asking myself questions made it easier to come up with a solution. But I think it would have been much quicker to have something else to bounce ideas off of, and since I couldn't talk to any colleagues, I thought it would be cool if I had an AI that could fulfill the same function. It might even push me into a new way of thinking about a problem, since an AI would not follow my usual cognitive patterns. So has anyone ever tried it?


Lab41 Reading Group: Skip-Thought Vectors

#artificialintelligence

Their model requires groups of sentences to train, and so it was trained on the BookCorpus dataset. The dataset consists of novels by unpublished authors and is (unsurprisingly) dominated by romance and fantasy novels. This "bias" in the dataset will become apparent later when discussing some of the sentences used to test the skip-thought model; some of the retrieved sentences are quite exciting! Building a model that accounts for the meaning of an entire sentence is tough because language is remarkably flexible. Changing a single word can either completely change the meaning of a sentence or leave it unaltered.
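As a rough sketch of the skip-thought objective described above (not Lab41's or the original authors' code; the class name, hyperparameters, and toy data are assumptions), a GRU encoder maps the middle sentence of a triple to a "thought vector," and two GRU decoders are trained to reconstruct the neighboring sentences from it:

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 20000, 128, 256  # toy sizes, not the published settings

    class SkipThoughtSketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMB)
            self.encoder = nn.GRU(EMB, HID, batch_first=True)
            self.dec_prev = nn.GRU(EMB, HID, batch_first=True)
            self.dec_next = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)  # shared vocabulary projection

        def forward(self, middle, prev, nxt):
            # Encode the middle sentence; its final hidden state is the thought vector.
            _, thought = self.encoder(self.embed(middle))  # shape (1, batch, HID)
            # Condition each decoder on the thought vector and teacher-force the
            # neighboring sentences (inputs shifted one token from targets).
            prev_h, _ = self.dec_prev(self.embed(prev[:, :-1]), thought)
            next_h, _ = self.dec_next(self.embed(nxt[:, :-1]), thought)
            loss = nn.functional.cross_entropy(
                self.out(prev_h).reshape(-1, VOCAB), prev[:, 1:].reshape(-1)
            ) + nn.functional.cross_entropy(
                self.out(next_h).reshape(-1, VOCAB), nxt[:, 1:].reshape(-1)
            )
            return thought.squeeze(0), loss

    # Toy usage: random token ids stand in for a (previous, middle, next) triple.
    model = SkipThoughtSketch()
    prev, middle, nxt = (torch.randint(0, VOCAB, (4, 12)) for _ in range(3))
    vec, loss = model(middle, prev, nxt)
    print(vec.shape, loss.item())  # thought vectors of shape (4, 256) and a scalar loss

After training on many such triples, only the encoder is kept, and its thought vectors serve as general-purpose sentence representations, which is why the surrounding text (and any bias in it) matters so much.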


Deep Thought Wins Fredkin Intermediate Prize

AI Magazine

Since May 1988, Deep Thought (DT), the creation of a team of students at Carnegie Mellon University, has been attracting a lot of notice. In the Fredkin Masters Open, May 28-30, DT tied for second in a field of over 20 masters, finishing ahead of three other computers, including Hitech and Chiptest (the winner of the 1987 North American Computer Chess Championship). In August, at the U.S. Open, DT scored 8.5-3.5 to tie for eighteenth place with Arnold Denker, among others. Its performance was marred by hardware and software bugs.