Fake It to Make It: Companies Beef Up AI Models With Synthetic Data
Companies rely on real-world data to train artificial-intelligence models that can identify anomalies, make predictions and generate insights. To detect credit-card fraud, for example, researchers train AI models to look for specific patterns of known suspicious behavior, gleaned from troves of data. But unique, or rare, types of fraud are difficult to detect when there isn't enough data to support the algorithm's training. To get around that, companies are learning to fake it, building so-called synthetic data sets designed to augment training data. At American Express Co., machine-learning and data scientists have been experimenting with synthetic data for nearly two years in hopes of improving the company's AI-based fraud-detection models, said Dmitry Efimov, head of the company's Machine Learning Center of Excellence. The credit-card company uses an advanced form of AI to generate fake fraud patterns aimed at bolstering the real training data.
- Information Technology (1.00)
- Banking & Finance (0.96)
- Law Enforcement & Public Safety > Fraud (0.78)
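The article describes generating synthetic examples of rare fraud patterns to augment scarce training data. American Express reportedly uses an advanced generative model for this; as a much simpler illustration of the same augmentation idea, here is a minimal SMOTE-style sketch that creates new minority-class rows by interpolating between random pairs of real ones. The function name `augment_minority` and the toy data are assumptions for illustration, not anything from the article.

```python
import numpy as np

def augment_minority(X, n_new, rng=None):
    """Generate synthetic samples by interpolating between random
    pairs of real minority-class rows (a SMOTE-style sketch)."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(X), size=n_new)   # first endpoint of each pair
    j = rng.integers(0, len(X), size=n_new)   # second endpoint of each pair
    t = rng.random((n_new, 1))                # interpolation weight in [0, 1)
    return X[i] + t * (X[j] - X[i])

# Toy scenario: only 20 confirmed fraud rows with 5 features each.
real_fraud = np.random.default_rng(0).normal(size=(20, 5))
synthetic = augment_minority(real_fraud, n_new=200, rng=1)
print(synthetic.shape)  # (200, 5)
```

Because each synthetic row lies on a segment between two real rows, every feature stays inside the range observed in the real data; production systems use far richer generators (e.g. GANs) precisely to escape that limitation.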
Chatbots Are Machine Learning Their Way To Human Language
Moveworks founding team, from left to right: Vaibhav Nivargi, CTO; Bhavin Shah, CEO; Varun Singh, VP of Product; Jiang Chen, VP of Machine Learning. Computers and humans have never spoken the same language. Over and above speech recognition, we also need computers to understand the semantics of written human language. We need this capability because we are building the Artificial Intelligence (AI)-powered chatbots that now form the intelligence layers in Robotic Process Automation (RPA) systems and beyond. Known formally as Natural Language Understanding (NLU), this capability has long proved elusive: early attempts (as recently as the 1980s) to give computers the ability to interpret human text were comically terrible.
Three Tricks to Amplify Small Data for Deep Learning
It's no secret that deep learning lets data science practitioners reach new levels of accuracy with predictive models. However, one of the drawbacks of deep learning is that it typically requires huge data sets (not to mention big clusters). But with a little skill, practitioners with smaller data sets can still partake of deep learning riches. Deep learning has exploded in popularity, with good reason: deep learning approaches, such as convolutional neural networks (used primarily for image data) and recurrent neural networks (used primarily for language and textual data), can deliver higher accuracy and precision compared to "classical" machine learning approaches, like regression algorithms, gradient-boosted trees, and support vector machines. But that higher accuracy comes at a cost.
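One of the most common ways to amplify a small data set for deep learning is label-preserving data augmentation. The article's specific "three tricks" are not listed in this excerpt, so the sketch below is an assumed illustration of the general idea for image data: multiplying a tiny training set with horizontal flips and small shifts. The function name `augment_images` and the toy arrays are hypothetical.

```python
import numpy as np

def augment_images(images, shifts=(-2, 2)):
    """Expand a small image set with label-preserving transforms:
    horizontal flips plus small horizontal translations."""
    out = [images]
    out.append(images[:, :, ::-1])              # mirror each image left-right
    for s in shifts:                            # wrap-around shifts of a few pixels
        out.append(np.roll(images, s, axis=2))
    return np.concatenate(out, axis=0)

# Toy batch: 10 grayscale 28x28 "images" becomes 40 after augmentation.
small_set = np.random.default_rng(0).random((10, 28, 28))
bigger_set = augment_images(small_set)
print(bigger_set.shape)  # (40, 28, 28)
```

Each transform should leave the label valid for the task at hand (a flipped cat is still a cat, but a flipped "6" is not a "6"), which is why augmentation pipelines are chosen per domain.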