Data Mining


How AI and Intelligent Automation Impact HR Practices - InformationWeek

#artificialintelligence

Today's businesses must continually improve process efficiencies to stay competitive. Given the high cost of human labor, they are turning to AI and intelligent automation technologies to lower costs and increase ROI. While there's considerable anxiety in some parts of the workforce about the impact of AI and intelligent automation, experts say workers and their employers are happier when humans focus on what they do best. Forrester Research has defined a six-level maturity model that describes where companies fit along the automation spectrum. At the lower levels, companies are experimenting and piloting technology.


Is big data a big deal?

#artificialintelligence

Their prediction is that people will still use money as a store of value and a way to transact, but that "rich data flows" will eventually "replace, or at the very least complement, the informational role of money". It means established institutions like banks will be forced to compete against "savvy new entrants" that use machine learning and artificial intelligence technologies to exploit an ever-growing mass of information to gain a competitive advantage. You can already see many of these trends in the economy. Amazon has captured market share by mining enormous data sets and targeting consumers by precisely matching their preferences. Google and Facebook do much the same: their users get these services supposedly for free, though in practice they are trading their personal data to advertisers.


Zillow Data Science Interview Questions -- Acing the AI Interview

#artificialintelligence

My AI Interview Questions articles for Microsoft, Google, Amazon, Netflix, LinkedIn, eBay, Twitter, Walmart, Apple, Facebook, Salesforce and Uber have been very helpful to readers. As a follow-up, the next couple of articles covered how to prepare for these interviews, split into two parts, Part 1 and Part 2. If you want suggestions on how to showcase your AI work, please visit Acing AI Portfolios. Zillow is a gigantic spatial database. The GIS team within Zillow works on interesting problems like spatial ETL, normalization of geospatial data, and establishing geospatial relationships between data points. Very few companies in the world have these kinds of problems to solve.
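
As a rough illustration of the kind of geospatial relationship the GIS team works with (this is not Zillow's actual pipeline; the coordinates and listing names below are invented), a point-in-polygon check with shapely might look like this:

```python
# Hypothetical sketch: testing a spatial relationship between listing points
# and a neighborhood polygon with shapely. All names and coordinates are
# illustrative only.
from shapely.geometry import Point, Polygon

# A simple rectangular "neighborhood" in longitude/latitude coordinates.
neighborhood = Polygon([(-122.35, 47.60), (-122.30, 47.60),
                        (-122.30, 47.65), (-122.35, 47.65)])

listings = {
    "listing_a": Point(-122.32, 47.62),
    "listing_b": Point(-122.40, 47.58),
}

for listing_id, point in listings.items():
    inside = neighborhood.contains(point)              # point-in-polygon test
    distance = neighborhood.exterior.distance(point)   # distance in degrees, illustrative
    print(listing_id, inside, round(distance, 4))
```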


How Conservationists Are Using AI And Big Data To Aid Wildlife

#artificialintelligence

Give Jason Holmberg 10,000 zebra photos and he'll find the specific individual zebra you're looking for, no problem. "It could take two minutes," he said. Holmberg won't personally sort through the photos -- it's his software that will. Holmberg is executive director of the nonprofit Wild Me. The Portland-based organization has developed a digital tool called Wildbook that uses artificial intelligence and machine learning to expedite wildlife identification.


The 10 Statistical Techniques Data Scientists Need to Master

#artificialintelligence

Regardless of where you stand on the matter of Data Science sexiness, it's simply impossible to ignore the continuing importance of data, and our ability to analyze, organize, and contextualize it. Drawing on their vast stores of employment data and employee feedback, Glassdoor ranked Data Scientist #1 in their 25 Best Jobs in America list. So the role is here to stay, but unquestionably, the specifics of what a Data Scientist does will evolve. With technologies like Machine Learning becoming ever more commonplace, and emerging fields like Deep Learning gaining significant traction amongst researchers and engineers -- and the companies that hire them -- Data Scientists continue to ride the crest of an incredible wave of innovation and technological progress. While strong coding ability is important, data science isn't all about software engineering (in fact, a working familiarity with Python will take you a long way).


Moneyball for business: How AI is changing talent management

#artificialintelligence

The online games were easy -- until I got to challenge number six. I was applying for a job at Unilever, the consumer-goods behemoth behind Axe Body Spray and Hellmann's Real Mayonnaise. I was halfway through a series of puzzles designed to test 90 cognitive and emotional traits, everything from my memory and planning speed to my focus and appetite for risk. A machine had already scrutinized my application to determine whether I was fit to reach even this test-taking stage. Now, as I sat at my laptop, scratching my head over a probability game that involved wagering varying amounts of virtual money on whether I could hit my space bar five times within three seconds or 60 times within 12 seconds, an algorithm custom-built for Unilever analyzed my every click. I furiously stabbed at my keyboard, my chances of joining one of the world's largest employers literally at my fingertips.


Salesforce open-sources TransmogrifAI, the machine learning library that powers Einstein

#artificialintelligence

Data scientists spend weeks and months not only preprocessing the data on which the models are to be trained, but extracting useful features from that data, narrowing down algorithms, and ultimately building (or attempting to build) a system that performs well not just within the confines of a lab, but in the real world. Salesforce's new toolkit aims to ease that burden somewhat. On GitHub today, the San Francisco-based cloud computing company published TransmogrifAI, an automated machine learning library for structured data -- the kind of searchable, neatly categorized data found in spreadsheets and databases -- that performs feature engineering, feature selection, and model training in just three lines of code. It's written in Scala and built on top of Apache Spark (some of the same technologies that power Salesforce's AI platform, Einstein) and was designed from the ground up for scalability. To that end, it can process datasets ranging from dozens to millions of rows and run on clustered machines on top of Spark or on an off-the-shelf laptop.
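
The "three lines of code" refer to TransmogrifAI's Scala API. As a loose analogue only (this sketch uses scikit-learn, not TransmogrifAI, and the training data is a placeholder), the same automated preprocessing-plus-model-training pattern might be expressed like this:

```python
# Rough scikit-learn analogue of the automated pattern described above
# (automatic handling of categorical and numeric columns, then model fitting
# in a few lines). This is NOT TransmogrifAI's actual API.
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), make_column_selector(dtype_include=object)),
    (SimpleImputer(), make_column_selector(dtype_include="number")),
)
model = make_pipeline(preprocess, GradientBoostingClassifier())
# model.fit(X_train, y_train)   # X_train / y_train are placeholders for your data
```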


AI for code encourages collaborative, open scientific discovery

#artificialintelligence

We have seen significant recent progress in pattern analysis and machine intelligence applied to images, audio and video signals, and natural language text, but not as much applied to another artifact produced by people: computer program source code. In a paper to be presented at the FEED Workshop at KDD 2018, we showcase a system that makes progress towards the semantic analysis of code. By doing so, we provide the foundation for machines to truly reason about program code and learn from it. The work, also recently demonstrated at IJCAI 2018, was conceived and is led by IBM Science for Social Good fellow Evan Patterson and focuses specifically on data science software. Data science programs are a special kind of computer code, often fairly short but full of semantically rich content that specifies a sequence of data transformation, analysis, modeling, and interpretation operations.
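
To make concrete what such a short, semantically rich data science program looks like (the file and column names below are invented for illustration and are not taken from the paper):

```python
# A tiny example of the kind of short data science script the excerpt
# describes: load, transform, analyze, model, interpret. Dataset and column
# names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("measurements.csv")            # data loading
df = df.dropna(subset=["dose", "response"])     # transformation / cleaning
print(df.describe())                            # analysis / summary statistics

model = LinearRegression()                      # modeling
model.fit(df[["dose"]], df["response"])
print("slope:", model.coef_[0])                 # interpretation of the fit
```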


Predicting Customer Churn with IBM Watson Studio

#artificialintelligence

Business leaders understand the advantage of using the power of artificial intelligence and machine learning to stay ahead of their competitors. However, understanding the power of AI is a lot different than actually implementing it successfully in companies. For example, in 2017, Gartner estimated that Big Data projects have a success rate of only 15%. While organizational factors may be a primary reason for this poor success rate, another could be a lack of the AI/machine learning talent needed to pursue these types of projects successfully. Specifically, it's been shown that there is a shortage of advanced machine learning talent among data professionals; fewer than 20% of surveyed data professionals said they were competent in areas such as Natural Language Processing (19%), Recommendation Engines (14%), Reinforcement Learning (6%), Adversarial Learning (4%) and Neural Networks – RNNs (15%).
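
For readers who want a concrete picture of the churn-modeling task itself, a minimal sketch might look like the following (this uses scikit-learn with hypothetical column names; it is not a Watson Studio workflow):

```python
# Minimal churn-classification sketch, shown only to make the modeling task
# concrete. The file name and feature columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                            # hypothetical customer table
X = df[["tenure_months", "monthly_charges", "support_tickets"]]
y = df["churned"]                                            # 1 = left, 0 = stayed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```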


An Introduction to t-SNE with Python Example

#artificialintelligence

I've always had a passion for learning and consider myself a lifelong learner. Being at SAS as a data scientist allows me to learn and try out new algorithms and functionalities that we regularly release to our customers. Oftentimes the algorithms are not technically new, but they're new to me, which makes it a lot of fun. Recently, I had the opportunity to learn more about t-Distributed Stochastic Neighbor Embedding (t-SNE). In this post I'm going to give a high-level overview of the t-SNE algorithm.
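
As a minimal, generic example of what such an overview builds toward (this uses scikit-learn's t-SNE on the built-in digits dataset, not the SAS implementation the author describes):

```python
# Project the 64-dimensional digits dataset down to 2-D with t-SNE and plot it.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE projection of the digits dataset")
plt.show()
```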