
Pinaki Laskar on LinkedIn: #machinelearning #artificialintelligence #nlp


What are the current AI or machine learning research trends? NLP and large neural networks trained for language understanding and generation are widely promoted as the best shortcut to artificial general intelligence. Large language models such as PaLM, GLaM, GPT-3, Megatron-Turing NLG, Gopher, Chinchilla and LaMDA are led by the WuDao 2.0 model, trained on 1.2TB of text and 4.9TB of images using 1.75 trillion parameters to simulate conversations, understand pictures, write poems and create recipes. All of this relies on brute-force scaling: models tens of gigabytes in size, trained on enormous amounts of text data, sometimes at the petabyte scale. The Pathways Language Model (PaLM) is a 540-billion-parameter, dense decoder-only Transformer trained with the Pathways system, which enabled Google to efficiently train a single model across multiple TPU v4 Pods.
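Where do headline figures like "540 billion parameters" come from? A rough back-of-the-envelope sketch, assuming the standard GPT-style decoder block (attention contributes about 4·d² weights per layer and the MLP about 8·d²); the layer count, width and vocabulary size below are PaLM's publicly reported shapes, and the estimate deliberately ignores biases, layer norms and PaLM-specific variations such as SwiGLU:

```python
# Back-of-the-envelope parameter count for a dense decoder-only Transformer.
# The 12 * n_layers * d_model^2 rule of thumb is an assumption (standard
# GPT-style blocks: attention ~4*d^2, MLP ~8*d^2), not an exact accounting.

def transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Estimate total parameters of a GPT-style decoder-only model."""
    per_block = 12 * d_model ** 2       # attention (4*d^2) + MLP (8*d^2)
    embeddings = vocab_size * d_model   # token embedding matrix
    return n_layers * per_block + embeddings

# Reported PaLM shapes: 118 layers, d_model = 18432, ~256k-token vocabulary.
estimate = transformer_params(118, 18432, 256_000)
print(f"{estimate / 1e9:.0f}B parameters")  # roughly 486B, close to the 540B headline
```

The gap between the ~486B estimate and the reported 540B comes from the terms the rule of thumb drops, which is exactly why such counts are quoted as round numbers.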

Google Is Close To Achieving True Artificial Intelligence?


DeepMind, a Google-owned British company, might be on the verge of creating human-level artificial intelligence. The revelation was made by the company's lead researcher Dr. Nando de Freitas in response to The Next Web columnist Tristan Greene, who claimed humans will never achieve AGI. For anyone who doesn't know, AGI refers to a machine or program that can understand or learn any intellectual task that humans can, without being trained specifically for each one. Addressing the somewhat pessimistic op-ed, and the decades-long quest to develop artificial general intelligence, Dr. de Freitas said "the game is over."

Defining 'artificial intelligence' for regulation


In the course of the most recent wave of expectation and hype about "artificial intelligence" (AI) -- let's say the last 10 years -- there have been repeated attempts to define what it is. Serious documents, such as those from academics, governments, or professional bodies, typically say that there is no agreed definition and then propose their own or fall back on a well-known one (for example, the UK Government used the phrasing: "AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence"). Popular articles tend not to agonise about it, but use the term to imply something technically advanced or futuristic. Now that governments are crafting laws referring to AI (e.g. the EU's AI Act and the UK National Security and Investment Act 2021), the definition is beginning to matter a lot. The scope of a law should include neither too much nor too little; it should be clear which cases fall within it and which do not; it should be understandable by anyone using it, so that anyone can easily determine whether a case falls under it; and it should not need continual updating. Consequently, the debate on the scope of the EU AI Act (ongoing at the time of writing) is crucial to the impact of the eventual regulation.

Horses and pigs can distinguish between negative and positive sounds in human speech

Daily Mail - Science & tech

From 'Babe' to 'Black Beauty', popular culture is constantly telling us that speaking to animals gently and 'politely' is the best way to get them to do our bidding. Now a new study has shown the same is true in the real world, as domesticated animals like pigs and horses can tell the difference between negative and positive sounds in human speech. Researchers from the University of Copenhagen's Department of Biology and ETH Zurich found that the animals reacted more strongly to 'negatively charged' human voices. In some cases they even seemed to mirror the emotion expressed in the human voice, according to the researchers. The researchers concluded that horses are most likely able to perceive and interpret each other's sounds by virtue of their common biology.

AI writing has entered a new dimension, and it's going to change education


What happens when robots not only learn to write well, but the tech becomes easily accessible and cheap? As Hal Crawford explains, it'll likely be teachers who feel the effects first. There are two schools of thought when it comes to artificial intelligence: there are the people who have heard of the GPT-3 language model, and then there are those who have heard about it, gone to the OpenAI site, created a guest login and tried it out for themselves. The first group contains people who are wondering what the big deal is. The second group does not. I haven't heard of anyone who's actually used GPT-3 and doesn't think AI is going to change the world profoundly. Education in particular is going to feel its influence immediately.

Emotion recognition AI finding fans among lawyers swaying juries and potential clients


The American Bar Association has taken greater notice of emotional AI as a tool for honing courtroom and marketing performance. It is not clear if the storied group has caught up with the controversy that follows the comparatively new field. On the association's May 18 Legal Rebels podcast, ABA Journal legal affairs writer Victor Li speaks with the CEO of software startup EmotionTrac (a subsidiary of mobile ad tech firm Jinglz) about how an app first designed for the advertising industry reportedly has been adopted by dozens of attorneys. Aaron Itzkowitz is at pains to make clear the difference between facial recognition and affect recognition. At the moment, the use of face biometrics by governments is a growing controversy, and Li would like to stay separate from that debate.

Neuromorphic memory device simulates neurons and synapses: Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence


Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering emulated the biological working mechanisms of the brain by introducing neuron-synapse interactions in a single memory cell, rather than taking the conventional approach of electrically connecting separate artificial neuronal and synaptic devices. The artificial synaptic devices studied previously were often used to accelerate parallel computations, much like commercial graphics cards, which differs markedly from the operational mechanisms of the human brain.
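The neuron-synapse interaction the article describes in hardware can be illustrated in software. A minimal sketch, assuming a leaky integrate-and-fire (LIF) neuron whose input synapse is strengthened each time it helps trigger a spike; all constants are illustrative assumptions, not parameters of the reported memory device:

```python
# One leaky integrate-and-fire neuron with a Hebbian-like plastic synapse:
# firing potentiates the synapse, so neuron and synapse interact rather
# than operating as disconnected units.

def simulate(inputs, weight=0.5, threshold=1.0, leak=0.9, potentiation=0.05):
    """Run a LIF neuron over a binary input train; return (spikes, weight)."""
    v = 0.0                           # membrane potential
    spikes = []
    for x in inputs:
        v = v * leak + weight * x     # leak, then integrate the synaptic input
        if v >= threshold:            # fire...
            spikes.append(1)
            v = 0.0                   # ...and reset the membrane
            weight += potentiation    # firing strengthens the synapse
        else:
            spikes.append(0)
    return spikes, weight

spikes, w = simulate([1, 1, 1, 0, 1, 1, 1, 1])
print(spikes, round(w, 2))  # [0, 0, 1, 0, 0, 1, 0, 1] 0.65
```

Note how the inter-spike interval shortens as the synapse potentiates: the same input train triggers spikes more readily over time, which is the kind of coupled neuron-synapse behavior the single-cell device is designed to capture.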

How can artificial intelligence understand time and space?


Time and space are fundamental to the existence of the universe, and human intelligence is our tool for navigating them in an appropriate manner. Our ability to see the future is critical. The human brain has evolved into a tool that perceives not only time, place, and things; our neural network also predicts what will happen in the near future. What kind of path will the stone that you throw take? In which direction will the tree fall?
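The thrown-stone question is exactly the kind of short-horizon physical prediction the passage describes, and it is also a textbook computation. A minimal sketch, assuming ideal projectile motion with no air resistance (the speed and angle are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def stone_path(speed: float, angle_deg: float, steps: int = 5):
    """Predict (x, y) points along a projectile's parabolic arc."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    t_flight = 2 * vy / G                      # time until it returns to y = 0
    return [(vx * t, vy * t - 0.5 * G * t * t)
            for t in (i * t_flight / (steps - 1) for i in range(steps))]

path = stone_path(20.0, 45.0)
print(f"lands {path[-1][0]:.1f} m away")       # range = v^2 * sin(2a) / g
```

The brain, of course, does not integrate equations of motion; the point of the contrast is that an explicit physical model is one way a machine can approximate the predictive ability that evolution built into us.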