Lesson Learned


TripAdvisor's Lessons Learned From Building A Chatbot

#artificialintelligence

That way, even if a user doesn't explicitly ask for something, a chatbot could potentially know what to recommend based on their patterns or behavior. Listen to the podcast interview with Jeff Chow, VP Product, Consumer Experience, TripAdvisor, here. The company's next focus is to increase engagement with its partner businesses so it can provide better recommendations to customers. When done correctly, chatbots can create great opportunities for an organization.


5 lessons learned from making a banking chatbot

#artificialintelligence

However, I believe the real power of conversational interfaces will come when people feel free to communicate any broad need. On the technology side, I am building a smarter engagement engine and focusing on the customer journey (more details on this will come in a future post). Based on knowing a few things about each user (account age, usage activity, engagement, basic demographics), this algorithm would determine the right message and the right time to send it, and would also learn from the user's response and feedback. I'm continually trying to improve Teller, including building features like bank account integration, adding new messaging channels, and operationalizing deployments.
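A minimal sketch of what such an engagement engine could look like, assuming Python and a simple epsilon-greedy policy; the message names, user attributes, and bucketing below are hypothetical placeholders, not Teller's actual implementation:

    import random
    from collections import defaultdict

    MESSAGES = ["onboarding_tip", "feature_nudge", "re_engagement"]
    EPSILON = 0.1  # fraction of the time we explore a random message

    # stats[(bucket, message)] = [responses, attempts]
    stats = defaultdict(lambda: [0, 0])

    def bucket(user):
        """Reduce a user profile to a coarse segment key."""
        return (user["account_age_days"] // 30, user["active_last_week"])

    def choose_message(user):
        """Mostly pick the best-performing message for this segment; occasionally explore."""
        key = bucket(user)
        if random.random() < EPSILON:
            return random.choice(MESSAGES)
        def rate(msg):
            responses, attempts = stats[(key, msg)]
            return responses / attempts if attempts else 0.0
        return max(MESSAGES, key=rate)

    def record_feedback(user, message, responded):
        """Update response statistics once the user reacts (or ignores the message)."""
        key = bucket(user)
        responses, attempts = stats[(key, message)]
        stats[(key, message)] = [responses + int(responded), attempts + 1]

    # Example usage: pick a message for a user, then learn from their reaction.
    user = {"account_age_days": 45, "active_last_week": True}
    msg = choose_message(user)
    record_feedback(user, msg, responded=True)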


Large Scale Decision Forests: Lessons Learned - Sift Science Engineering Blog

#artificialintelligence

The way we handle sparse features for our logistic regression model turns every sparse feature into tens of indicator features, which results in a feature vector with several thousand features. Training time isn't affected, since the sufficient statistics required to compute split impurities given this third branch are already collected as part of our regular training algorithm. Effectively, we realized that increasing width led to improved variance, while increasing max depth led to improved bias.
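The width-versus-depth observation is easy to reproduce with an off-the-shelf forest. The following is an illustrative scikit-learn sketch on synthetic data, not Sift Science's training code; the sample sizes and parameter grid are arbitrary:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Width (n_estimators) mainly reduces variance by averaging more trees,
    # while max_depth controls how much bias each individual tree can remove.
    X, y = make_classification(n_samples=2000, n_features=50,
                               n_informative=10, random_state=0)

    for n_trees, depth in [(10, 4), (200, 4), (10, 16), (200, 16)]:
        forest = RandomForestClassifier(n_estimators=n_trees,
                                        max_depth=depth, random_state=0)
        score = cross_val_score(forest, X, y, cv=3).mean()
        print(f"width={n_trees:4d} depth={depth:2d} accuracy={score:.3f}")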


Lessons Learned from Deploying Deep Learning at Scale

#artificialintelligence

Deep learning is a machine learning technique used to solve complex problems related to image recognition, natural language processing, and more. However, deploying deep learning models in the cloud can be challenging due to complex hardware requirements and software dependencies. And, more importantly, once you've picked a framework and trained a machine-learning model to solve your problem, how do you reliably deploy deep learning frameworks at scale? We've created an open marketplace for algorithms and algorithm development, making state-of-the-art algorithms accessible and discoverable by everyone.
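As a rough illustration of the general serving pattern (load a trained model once, expose a small prediction endpoint), here is a hedged Flask/PyTorch sketch; the model.pt file, input format, and endpoint are hypothetical and unrelated to Algorithmia's actual platform API:

    import torch
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    model = torch.jit.load("model.pt")   # hypothetical TorchScript export
    model.eval()

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"features": [0.1, 0.2, ...]}
        features = request.get_json()["features"]
        with torch.no_grad():
            output = model(torch.tensor([features], dtype=torch.float32))
        return jsonify({"prediction": output.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)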


Shifting from Big Data to Machine Learning: Lessons Learned

#artificialintelligence

Arvid Tchivzhel, Director of Product Development at Mather Economics, oversees delivery and operations for all Mather Economics consulting engagements. Before plunging into the world of machine learning, firms should pause and learn from the mistakes made in the implementation of Big Data projects over the last five years. History can be our guide: Big Data projects returned 55 cents for every dollar spent, and the three primary reasons for an underwhelming ROI are a lack of skilled practitioners, immature technology, and a lack of compelling business cases. The role of a data scientist includes technical proficiency to manipulate data using SQL, NoSQL, and ETL tools, broad knowledge of statistical techniques and predictive modeling (the core of machine learning), plus softer skills to visualize and present the data and output.


Download Machine Learning White Paper: Practical Lessons Learned from the 1M Netflix Prize

#artificialintelligence

Netflix spent $1 million on a machine learning and data mining competition called the Netflix Prize to improve movie recommendations through crowdsourced solutions, but in the end couldn't use the winning solution in its production system.


Lessons Learned While Developing Machine Learning Products

#artificialintelligence

So think about how to build the whole product with the following in mind: is the data coming out of this product going to be good training data? That, I think, is something that should always be on the table. Some examples of how to do it right would be giving users opportunities to correct errors when they occur, and making sure that happens in the flow of the product. It should be presented in a way that makes the user feel it provides value, because they are helping to fix the product and improve it for their own benefit.
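One hedged sketch of closing that loop: log every prediction, and when a user corrects an error in the product flow, store the corrected label alongside it so the pair becomes future training data. The file format, item IDs, and labels below are hypothetical:

    import csv
    from datetime import datetime, timezone

    LOG_PATH = "feedback_log.csv"

    def log_prediction(item_id, features, predicted_label):
        """Record what the model said, before the user sees it."""
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow([item_id, datetime.now(timezone.utc).isoformat(),
                                    features, predicted_label, ""])

    def log_correction(item_id, corrected_label):
        """Record the label the user chose when fixing a mistake in the UI."""
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow([item_id, datetime.now(timezone.utc).isoformat(),
                                    "", "", corrected_label])

    # Example usage: the model predicts, the user overrides it inside the product.
    log_prediction("doc-42", [0.3, 1.7, 0.0], "invoice")
    log_correction("doc-42", "receipt")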


CSC321 Winter 2015: Introduction to Neural Networks

@machinelearnbot

No lectures, tutorials, or office hours will be held for the duration of the CUPE 3902 Unit 1 strike. There is no class during reading week, but we will still hold office hours. There are no office hours on Monday, since it is a holiday.


Large Scale Decision Forests: Lessons Learned

#artificialintelligence

To do this, we have devised a specialized modeling stack that is able to adapt to individual customers while simultaneously delivering a great out-of-the-box experience for new customers. This is achieved by mixing the output from a "global" model – trained on our entire network of data – with the output from a customer's individualized model. We wanted our global model to be expressive enough to easily model non-linearities in our feature space, but we also needed to retain the ability to explain our global model's predictions to our customers in a straightforward manner. Fast forward several months and 100 experiments later, and we now have a global decision forest model working as a productive member of our modeling stack. The way we handle sparse features for our logistic regression model turns every sparse feature into tens of indicator features, which results in a feature vector with several thousand features.
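A minimal sketch of one way such a mix could work, shifting weight from the global model toward the customer's individualized model as that customer accumulates its own labeled data; the weighting scheme and ramp constant are assumptions, not Sift's production logic:

    def blended_score(global_prob, customer_prob, n_customer_events, ramp=1000):
        """Weight the customer model by how much of its own data the customer has."""
        weight = min(n_customer_events / ramp, 1.0)
        return (1.0 - weight) * global_prob + weight * customer_prob

    # A brand-new customer relies almost entirely on the global model...
    print(blended_score(global_prob=0.82, customer_prob=0.35, n_customer_events=20))
    # ...while an established customer is scored mostly by its own model.
    print(blended_score(global_prob=0.82, customer_prob=0.35, n_customer_events=5000))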


Lesson learned from Amazon Echo: Don't make customers into developers

ZDNet

Amazon releases Smart Home API for Alexa: Developers, get ready to add 'skills'. Amazon's Alexa line of products will soon get new home automation skills thanks to the release of an API by the company. Ideally, you should be able to say something along the lines of "Alexa, bedtime" and it should loop a designated playlist or white noise track that you can pick from a pre-populated list within the Amazon Alexa smartphone app or preview using a voice command. There seems to be a trend with IoT of wanting to turn your end-users into your core developer ecosystem. Amazon Alexa clearly needs some kind of click-and-choose macro interface, similar to IFTTT, that allows function recipes such as my "Bedtime" command to be executed.
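To make the idea concrete, here is a hypothetical sketch of such a macro "recipe": a trigger phrase mapped to an ordered list of device actions, the kind of configuration a click-and-choose interface might generate. None of this uses a real Alexa API; the device names and actions are invented:

    RECIPES = {
        "bedtime": [
            {"device": "bedroom_lights", "action": "off"},
            {"device": "thermostat", "action": "set", "value": 68},
            {"device": "speaker", "action": "play", "value": "white_noise_playlist"},
        ],
    }

    def run_recipe(phrase):
        """Look up the trigger phrase and execute each step in order."""
        for step in RECIPES.get(phrase.lower(), []):
            print(f"-> {step['device']}: {step['action']} {step.get('value', '')}".strip())

    run_recipe("Bedtime")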