Cognitive Science

AI for 3D Generative Design


Several recent papers have investigated ideas similar to this project's; however, none of them captured the specific intent I was aiming for, so I took inspiration from these models while heading in a new general direction. Specifically, I wanted to be able to generate objects from at least 10 different categories (the papers below cover only 2–3), and I wanted to develop a model architecture with the capacity to extend to unlabelled 3D shape data. To produce an encoded knowledge base for this design space, I chose the PartNet database (a subset of ShapeNet), which has 30k densely annotated 3D models across 24 categories. From these annotations and heuristics on the models, I made simplified text descriptions. From the 3D models, I created 3D voxel volumes (voxels are like pixels in 3D) to represent each model in a form that could be fed into a neural network architecture.
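The voxelization step can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not the project's actual pipeline: it assumes each model is available as a 3D point cloud (e.g. sampled mesh surface points) and maps it into a fixed-resolution binary occupancy grid. The function name and the resolution are my own choices for the example.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Map a 3D point cloud of shape (N, 3) into a binary occupancy grid.

    Points are normalized into the unit cube, then each point marks
    the voxel cell it falls in.
    """
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid division by zero on flat axes
    # Scale into [0, resolution) and clamp points on the upper boundary
    idx = ((points - mins) / spans * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: the eight vertices of a cube occupy its eight corner voxels
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
grid = voxelize(cube, resolution=4)
print(grid.shape)       # (4, 4, 4)
print(int(grid.sum()))  # 8
```

A grid like this (or a higher-resolution one) is what a 3D convolutional network can consume directly, the same way a 2D CNN consumes an image.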

Short Attention Span Theater: A Quick Look At Quibi's Launch Titles

NPR Technology

Two chefs get blasted in the face with a mystery entree on Dishmantled. The streaming service Quibi -- short for "quick bites" -- calls itself "the first entertainment platform designed specifically for your phone." Perfect for the busy, distracted, on-the-go consumer! Too bad none of us are on-the-going anywhere, these days.

AI With Grove Zero and Codecraft (Scratch 3.0)


The neural network models used in the above application all run locally in your browser, which has a few distinct advantages over sending the data to the cloud for processing: lower latency and better privacy. A number of neural networks are used in Cognitive services: Sound Classification for speech commands, Face Landmark Detection, Face Expression Recognition, and Age Estimation. There are multiple ways you can build on these examples to make even more fun and exciting applications! If you decide to give it a try, be it with Grove Zero or just using Stage mode, do share in the comments below.

Deep Learning, Knowledge Representation and Reasoning

Journal of Artificial Intelligence Research

The recent success of deep neural networks at tasks such as language modelling, computer vision, and speech recognition has attracted considerable interest from industry and academia. Achieving a better understanding and widespread use of such models involves the use of Knowledge Representation and Reasoning together with sound Machine Learning methodologies and systems. The goal of this special track, which closed in 2017, was to serve as a home for the publication of leading research in deep learning towards cognitive tasks, focusing on applications of neural computation to advanced AI tasks requiring knowledge representation and reasoning.

Why Artificial Intelligence is (Still) Human Intelligence


At its core, Artificial Intelligence and its partner Machine Learning (abbreviated as AI/ML) are math. Specifically, probability – the application of weighted probabilistic networks at a computational scale we've never been able to reach before, which allows the computed probabilities to become self-training. It's that characteristic more than any other that makes AI seem like wizardry. The little cylinder on the kitchen counter that suddenly lights up when you call it by name feels like something out of science fiction, but that entire process is the end product of re-ingesting new data to fine-tune a highly complex probabilistic graph. The voice assistant recognizes its "name" not because it's self-aware but because it has been programmed to match an audio waveform to a database of known waveforms with certain characteristics.
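To make the waveform-matching idea concrete, here is a toy sketch (my own illustration, not how any real assistant is implemented): each "known waveform" is reduced to a feature vector, and recognition is just picking the database entry whose vector has the highest cosine similarity to what was heard. The names and vectors below are invented.

```python
import numpy as np

def best_match(query, database):
    """Return the key in `database` whose feature vector is most
    similar to `query` by cosine similarity."""
    def cosine(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(database, key=lambda name: cosine(query, database[name]))

# Toy "waveform fingerprints": in a real system these would be learned
# acoustic features, not three hand-picked numbers.
db = {
    "alexa":   [0.9, 0.1, 0.3],
    "siri":    [0.2, 0.8, 0.5],
    "cortana": [0.4, 0.4, 0.9],
}
heard = [0.85, 0.15, 0.25]  # a noisy observation of "alexa"
print(best_match(heard, db))  # alexa
```

The "self-training" part of the article's description corresponds to updating those stored feature vectors (or the model that produces them) as new audio comes in, so the matches keep improving.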

Brain Implants and AI Model Used To Translate Thought Into Text


Researchers at the University of California, San Francisco have recently created an AI system that can produce text by analyzing a person's brain activity, essentially translating their thoughts into text. The AI takes neural signals from a user and decodes them, and it can decipher up to 250 words in real time based on a set of between 30 and 50 sentences. As reported by the Independent, the AI model was trained on neural signals collected from four women. The participants in the experiment had electrodes implanted in their brains to monitor for the occurrence of epileptic seizures. The participants were instructed to read sentences aloud, and their neural signals were fed to the AI model.
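Because the reported system chooses among a fixed set of 30–50 sentences, decoding at its simplest reduces to closed-set classification. The toy sketch below is my own illustration (the sentences and feature vectors are invented, and the real model is far more sophisticated): it picks the sentence whose stored template is nearest the observed signal.

```python
import numpy as np

# Toy closed-set decoder. Real inputs would be multichannel electrode
# recordings over time, not a three-number vector per sentence.
sentences = {
    "the birch canoe slid on the smooth planks":  [1.0, 0.0, 0.0],
    "glue the sheet to the dark blue background": [0.0, 1.0, 0.0],
    "it is easy to tell the depth of a well":     [0.0, 0.0, 1.0],
}

def decode(signal):
    """Pick the sentence whose template is nearest the observed signal."""
    signal = np.asarray(signal, float)
    return min(sentences,
               key=lambda s: np.linalg.norm(signal - np.asarray(sentences[s])))

print(decode([0.9, 0.1, 0.2]))  # the birch canoe slid on the smooth planks
```

Restricting the output to a known sentence set is what makes real-time decoding tractable; open-vocabulary decoding from neural signals remains much harder.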

Four-year-olds have the same overconfidence as bankers, study says

Daily Mail - Science & tech

Cocky children as young as four have the same levels of overconfidence as city bankers and business leaders, according to a new study. UK researchers demonstrated that high levels of confidence in one's own abilities – a trait common among high achievers – is apparent from an extremely early age. This suggests that cocky city types developed their 'cognitive bias' in infancy rather than later in life, they say. Researchers conducted a card game with young girls and boys with the objective of collecting as many stickers as possible, and compared their different strategies. More than 70 per cent of four-year-olds and half of five- and six-year-olds were overconfident in their expectations - comparable to big shot bankers and traders.

AI fuels research that could lead to positive impact on health care


Brainstorm guest contributor Paul Fraumeni speaks with four York U researchers who are applying artificial intelligence to their research in ways that, ultimately, could lead to profound and positive impacts on health care in this country. Meet four York University researchers: Lauren Sergio and Doug Crawford have academic backgrounds in physiology; Shayna Rosenbaum has a PhD in psychology; Joel Zylberberg has a doctorate in physics. They have two things in common: they focus on neuroscience – the study of the brain and its functions – and they leverage advanced computing with artificial intelligence (AI) in their research. In a nondescript room in the Sherman Health Sciences Research Centre, Lauren Sergio sits down and places her right arm in a sleeve on an armrest. It's an odd-looking contraption; the lower part looks like a sling attached to a video game joystick.

A shared vision to advance Human-Centered AI


The session on Toward More General Artificial Intelligence was co-chaired by Asli Celikyilmaz and Chris Manning. We started with a shared reflection on where AI is today. For all of the excitement, AI researchers agree that solutions to date have been quite brittle and narrow in scope and capabilities. Presentations and discussions in this session covered key directions, opportunities, and research investments aimed at overcoming long-term challenges in achieving more general AI capabilities, including research that could enable AI systems to learn more effectively about the world in the wild from unsupervised data, methods for garnering and manipulating large amounts of commonsense knowledge, transferring what is learned on one or more tasks to new tasks and domains, and reasoning about causes and effects. The session on Human-AI Collaboration and Coordination was co-chaired by Ece Kamar and James Landay.

Diet and exercise will keep your brain young – depending on your genes

New Scientist

Will a regular exercise routine and a healthy diet keep your brain young? It depends on your genes. People who have certain forms of genes that play a key role in brain ageing seem to respond better to healthy lifestyle interventions. These variants make it "more likely that exercise or adhering to a Mediterranean diet will have a greater impact on your cognitive ageing," says Sandrine Thuret at King's College London. Cognitive ageing is thought to rely on neural stem cells in the brain's hippocampus, which continue to produce new neurons throughout life and are thought to play an important role in forming new memories.