
Symbolic AI: The key to the thinking machine


Even as many enterprises are just starting to dip their toes into the AI pool with rudimentary machine learning (ML) and deep learning (DL) models, a new form of the technology known as symbolic AI is emerging from the lab that has the potential to upend both the way AI functions and how it relates to its human overseers. Symbolic AI's adherents say it more closely follows the logic of biological intelligence because it analyzes symbols, not just data, to arrive at more intuitive, knowledge-based conclusions. It's most commonly used in linguistics models such as natural language processing (NLP) and natural language understanding (NLU), but it is quickly finding its way into ML and other types of AI where it can bring much-needed visibility into algorithmic processes. The technology actually dates back to the 1950s.



Training a neural network consists of optimizing stochastic functions in a high-dimensional space. In this context, finding a computationally and memory-efficient method with fast convergence properties is hard. On one hand, the objective functions encountered in deep learning are in most cases non-convex and non-stationary. On the other hand, gradients can be noisy and/or sparse. For efficient stochastic optimization, Adam, just like RMSProp and AdaGrad, relies only on first-order gradients.
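To make the idea concrete, here is a minimal sketch of a single Adam update in NumPy. The hyperparameter defaults follow the values commonly cited for Adam (lr=0.001, beta1=0.9, beta2=0.999); the function name and interface are illustrative, not from any particular library.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using only the first-order gradient `grad`.

    m: running estimate of the mean of the gradient (first moment)
    v: running estimate of the uncentered variance (second moment)
    t: 1-based step counter, needed for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad        # update first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # update second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # correct initialization bias
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Note that both moment estimates are built purely from `grad`, the first-order gradient; no Hessian or other second-order information is required, which is what keeps the method memory-efficient in high dimensions.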

'Pong' is now half a century old


Exactly 50 years ago today, Atari released Pong. It wasn't the first video game ever created, nor the original take on virtual table tennis – a fact that would eventually lead to two decades of lawsuits. But in Pong, the early video game industry was born. After its 1972 release, Atari sold more than 8,000 Pong arcade cabinets. A few years later, the home version of Pong would become an instant success, with Sears selling about 150,000 units of the console you needed to play the game.

Brain MRI Data & Machine Learning Models Might Help In Diagnosing ADHD


Although Attention Deficit Hyperactivity Disorder (ADHD) is one of the most common neurological conditions that affects children and adults, it is still widely misunderstood. ADHD symptoms are commonly misdiagnosed or remain undiagnosed -- particularly among girls and women. In a new study, researchers made a breakthrough by potentially finding a far more robust mechanism through which ADHD diagnosis via brain MRI scans might become a reality in the future. A team of three researchers at the Yale School of Medicine delved into the data from MRI tests that were conducted on 7,805 children based in the United States.

Physics - Machine-Learning Model Reveals Protein-Folding Physics


Proteins control every cell-level aspect of life, from immunity to brain activity. They are encoded by long sequences of compounds called amino acids that fold into large, complex 3D structures. Computational algorithms can model the physical amino-acid interactions that drive this folding [1]. But determining the resulting protein structures has remained challenging. In a recent breakthrough, a machine-learning model called AlphaFold [2] predicted the 3D structure of proteins from their amino-acid sequences.

4 AI research trends everyone is (or will be) talking about


Using AI in the real world remains challenging in many ways. Organizations are struggling to attract and retain talent, build and deploy AI models, define and apply responsible AI practices, and understand and prepare for regulatory framework compliance. At the same time, the DeepMinds, Googles and Metas of the world are pushing ahead with their AI research. Their talent pool, experience and processes around operationalizing AI research rapidly and at scale puts them on a different level from the rest of the world, creating a de facto AI divide. These are 4 AI research trends that the tech giants are leading on, but everyone else will be talking about and using in the near future.

Anticipating NYC's anti-bias law, Beamery conducts an internal AI audit - HR Executive


This is not Beamery's first audit of its AI tools. It conducted internal audits to test for compliance with General Data Protection Regulation, the 2016 European Union law that protects consumer identity and privacy. For AI anti-bias audits that fall under the New York City law, Beamery sought to test how its talent acquisition tools handle a potential job candidate's gender and ethnicity during the recruitment process. The first audit took place in the summer followed by a month-long audit in October.

Elon Musk's Neuralink has been 'mutilating and killing monkeys'

Daily Mail - Science & tech

Elon Musk plans to hold a 'Show and Tell' event for his brain chip company Neuralink on November 30, but a group of physicians claims the firm is 'mutilating and killing monkeys' to create a 'brain-machine interface.' Musk announced the event, which the company holds each year to showcase its latest updates, on Twitter. The first Show and Tell in 2020 demonstrated the brain implant in a pig and in 2021, the world saw it used by a monkey that died months after receiving the implant. The Physicians Committee for Responsible Medicine (PCRM) recently launched a website detailing the gruesome stories of monkeys that are said to have suffered from sloppy experiments conducted at the University of California, Davis (UC Davis). PCRM shared lab notes with

Stop Using 0.5 as the Threshold for Your Binary Classifier


To produce a binary response, classifiers output a real-valued score that is thresholded. For example, logistic regression outputs a probability (a value between 0.0 and 1.0); and observations with a score equal to or higher than 0.5 produce a positive binary output (many other models use the 0.5 threshold by default). However, using the default 0.5 threshold is often suboptimal. In this blog post, I'll show you how you can choose the best threshold for your binary classifier. We'll be using Ploomber to execute our experiments in parallel and sklearn-evaluation to generate the plots.
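The post itself uses Ploomber and sklearn-evaluation; as a minimal illustration of the core idea in plain scikit-learn, one can sweep the precision-recall curve and pick the threshold that maximizes F1 (the synthetic dataset and the F1 criterion here are assumptions for demonstration, not necessarily the post's exact method):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data, where 0.5 is rarely the best cutoff
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # real-valued scores in (0, 1)

# precision/recall at every candidate threshold; the last precision/recall
# pair has no associated threshold, hence the [:-1] slice below
prec, rec, thresholds = precision_recall_curve(y_te, scores)
f1 = 2 * prec * rec / (prec + rec + 1e-12)
best = thresholds[np.argmax(f1[:-1])]

y_pred = (scores >= best).astype(int)  # predictions at the tuned threshold
```

In practice the threshold should be tuned on a validation split rather than the test set, and the metric being maximized (F1, cost-weighted accuracy, recall at fixed precision) depends on the application.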

MTA to use artificial intelligence tech to keep buses from breaking down - Gothamist


The MTA plans to use artificial intelligence technology to help prevent buses from breaking down on the road. The agency has tested the tech -- from the company Preteckt -- for two years. Sills said it can flag serious equipment problems long in advance, enabling crews to be more proactive about bus maintenance, and that the technology prevents "progressive damage." "Where you have a small issue that can be fixed fairly inexpensively with little amount of time that, if you get ahead of, can prevent you from damaging a very expensive component," he said.