Dimensionality Reduction in Machine Learning

#artificialintelligence

Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data.
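A minimal sketch of the idea, using principal component analysis (PCA) via the singular value decomposition — one common dimensionality-reduction technique, chosen here for illustration; the data and the choice of two components are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 synthetic samples in 5-D that lie near a 2-D plane, plus small noise
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

# PCA: center the data, then project onto the top-k right singular vectors
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
X_reduced = Xc @ Vt[:k].T  # 200 x 2 low-dimensional representation

# Fraction of total variance retained by the first k components
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

Because the synthetic data is genuinely near-2-D, the two retained components preserve almost all of the variance — the "meaningful properties" the definition refers to.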


Machine Learning and Signal Processing

#artificialintelligence

Signal processing has given us a bag of tools that have been refined and put to very good use over the last fifty years. There is autocorrelation, convolution, Fourier and wavelet transforms, adaptive filtering via Least Mean Squares (LMS) or Recursive Least Squares (RLS), linear estimators, compressed sensing and gradient descent, to mention a few. Different tools are used to solve different problems, and sometimes we use a combination of these tools to build a system to process signals. Machine learning, and deep neural networks in particular, is much simpler to get used to because the underlying mathematics is fairly straightforward regardless of the network architecture we use. The complexity and the mystery of neural networks lie in the amount of data they process to get the fascinating results we currently have.
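To make one of those tools concrete, here is a minimal sketch of LMS adaptive filtering — identifying an unknown FIR system from input/output data with a stochastic-gradient update. The system coefficients, step size, and signal lengths are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
# Unknown 3-tap FIR system the adaptive filter should identify
h_true = np.array([0.5, -0.3, 0.2])
x = rng.normal(size=5000)                           # input signal
d = np.convolve(x, h_true, mode="full")[: len(x)]   # desired (system) output

# LMS: w <- w + mu * e[n] * x_window, driven by the instantaneous error
n_taps, mu = 3, 0.01
w = np.zeros(n_taps)
for n in range(n_taps, len(x)):
    window = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
    e = d[n] - w @ window                     # instantaneous error
    w += mu * e * window                      # stochastic gradient step
```

In this noiseless setup the filter weights converge close to `h_true`; the same update rule underlies echo cancellation and channel equalization.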


Hyun Kim, CEO and Co-Founder, Superb AI – Interview Series

#artificialintelligence

Hyun Kim is the CEO and Co-Founder of Superb AI, a company that provides a new-generation machine learning data platform to AI teams so that they can build better AI in less time. The Superb AI Suite is an enterprise SaaS platform built to help ML engineers, product teams, researchers and data annotators create efficient training data workflows. What initially attracted you to the field of AI, Data Science and Robotics? As an undergraduate majoring in Biomedical Engineering at Duke, I was passionate about genetics and how we can engineer our DNA to cure diseases or create genetically engineered organisms. I distinctly remember one wet-lab experiment that kept failing for six months straight. The most frustrating part was that there was a lot of repetitive manual work, and in hindsight that was probably the root of many potential errors.


Primates have evolved larger voice boxes than other mammals to help with social interactions

Daily Mail - Science & tech

Humans and other primates have evolved 'significantly larger' voice boxes than other mammals to help with social interactions, a new study shows. Compared with other mammals such as cats, the voice box, or larynx, of primates such as gorillas and chimpanzees is more than a third larger in relation to their body size. The researchers also found that primates' voice boxes undergo faster rates of evolution, and are diverse in function and more variable in size. Researchers made CT-scans of specimens from 55 different species, including primates and other mammals, and produced 3D computer models of their larynges. The research claims to be the first large-scale study into the evolution of the larynx, where tissue vibrations produce sounds for vocal communication.


Artificial intelligence examines best ways to keep parolees from recommitting crimes

#artificialintelligence

Purdue professors are trying to eliminate a return to prison for recently released criminals by using artificial intelligence research to lower the rate at …


"The 'AI' Cosmos" –Intelligent Algorithms Begin Processing the Universe

#artificialintelligence

If artificial intelligence can search for alien life, it should be able to … about big data captured this June and the power of machine learning that led to …



Edge2Learn Launches Artificial Intelligence Training for Multifamily Industry

#artificialintelligence

11, 2020 /PRNewswire/ — Edge2Learn, an e-learning company specializing in the multifamily industry, today announced the introduction of artificial …


Zignal Labs Selects Lexalytics to Provide AI-Based NLP

#artificialintelligence

Lexalytics, the leader in "words-first" machine learning and artificial intelligence, announced that Zignal Labs, creator of the Impact Intelligence platform for measuring the evolution of opinion in real time, has chosen Lexalytics to extend its natural language processing (NLP) and text analytics capabilities to help marketers, communicators and analysts gain a greater understanding of perceptions across traditional and social media. Zignal Labs has incorporated Lexalytics' on-premises Salience engine to analyze media in real time, across multiple industries, including financial services, technology, healthcare, consumer products, sports, entertainment and more. With Lexalytics, Zignal's customers can understand what people are saying about products, services or current events, categorize discussions into separate groupings and themes, and evaluate the sentiment of media coverage across multiple languages. "With more people working from home, and the increase in online discourse caused by the COVID-19 crisis and social justice movements, we've seen an explosion in the amount of content we're analyzing for our customers in all parts of the world and had a need to expand our NLP capabilities for international languages," said Jonathan Dodson, CTO of Zignal Labs. "We chose Lexalytics because out of all of the market leaders we evaluated, they have the best combination of accuracy and performance, breadth of foreign language capabilities, scale and price, as well as an on-premises solution, offering maximum tuning and features while keeping data processing costs to a minimum."


Data science's ongoing battle to quell bias in machine learning

#artificialintelligence

Bias in machine learning models can lead to false, unintended and costly outcomes for unknowing businesses planning their future and for victimized individuals planning their lives. This universal and inherent problem, and the techniques to solve it, weigh on the minds of data scientists working to achieve fair, balanced and realistic insights. As a planner, builder, tester and manager of machine learning models, Benjamin Cox contends daily with the issues surrounding bias in machine learning and its deleterious effects. As director of product marketing at H2O.ai, Cox leads responsible AI research and thought leadership. He also co-founded the AI innovation lab at Ernst & Young and has led teams of data scientists and machine learning engineers at Nike and NTT Data.