If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In the past, the goals of digital transformation were met primarily with business process management (BPM) tools, which aim to help companies orchestrate resources, route work to the right people, automate routine manual tasks, and enable self-service where none existed before. Using supervised machine learning, a BPM tool can find valuable patterns in data and automate business processes. The idea is that by connecting AI to existing BPM tools, and delivering the data generated by digitized processes to AI systems, companies could do even more to cut human latency (and thus cost) out of processes while also delivering a better end product to customers. By eliminating the manual work of determining the best targets for marketing, for example, a BPM tool frees human employees to focus on more complex tasks that drive productivity and revenue, such as fine-tuning those personalized campaigns for higher conversion rates.
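As an illustration of the kind of supervised scoring step a BPM tool might plug in, here is a minimal Python sketch; the customer features, labels, and threshold are all hypothetical, and the data is synthetic:

```python
# Hypothetical supervised-learning step a BPM tool might call to pick
# marketing targets; data, features, and threshold are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical per-customer features: [site_visits, past_purchases, days_since_contact]
X = rng.integers(0, 20, size=(200, 3)).astype(float)
# Hypothetical label: did the customer convert on a past campaign?
y = (X[:, 0] + 2 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(0, 3, 200) > 15).astype(int)

model = LogisticRegression().fit(X, y)

def should_target(features, threshold=0.7):
    """Return True when the model rates this customer a likely converter."""
    return bool(model.predict_proba([features])[0, 1] >= threshold)

# The BPM workflow would auto-enroll customers where this returns True
# and leave the borderline cases to human marketers.
print(should_target([15.0, 10.0, 1.0]))
```

The point is not the particular model but the division of labor: the trained model scores every lead, and the workflow engine routes only the uncertain ones to people.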
This company has developed a new anti-cancer drug (against pancreatic, breast, liver or brain cancer) called BPM 31510, which was discovered by an algorithm. Beyond the start-ups, all of the major technology companies have begun applying Big Data and artificial intelligence to health, using data from millions of people to find treatments. Big Data and artificial intelligence, combined with genetic analysis, allow researchers to search for and find patterns among patients with rare diseases who may be separated by distance but carry the same mutation.
HIMSS17 exhibitor profiles mentioning data are prolific (545 at the time I write this). For example, the following is a typical data science workflow: load data, understand data, create data objects, train a model, make predictions, and compute error. By automating data science and machine learning tasks (extracting variables from these workflows and managing them with workflow, data-flow, or pipeline management technology), results become more consistent, accurate, valuable, reusable, and shareable. There is even a new machine learning workflow language, called WhizzML, built to achieve these goals of automating machine learning workflows.
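The six workflow steps just listed can be sketched end to end in a few lines; the choice of scikit-learn and its bundled iris dataset here is mine, for illustration only:

```python
# A minimal sketch of the canonical workflow: load data, understand it,
# create data objects, train a model, make predictions, compute error.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1. Load data
data = load_iris()

# 2. Understand data (shape and classes)
print(data.data.shape, list(data.target_names))

# 3. Create data objects (train/test split)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

# 4. Train model
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# 5. Make predictions
predictions = model.predict(X_test)

# 6. Compute error
print("error rate:", 1 - accuracy_score(y_test, predictions))
```

A workflow language like WhizzML aims to capture exactly this kind of sequence as a managed, repeatable artifact rather than an ad-hoc script.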
It's good to know the context: what is the difference between Data Analytics, Data Analysis, Data Mining, Data Science, Machine Learning, and Big Data? Anaconda is popular in the Data Science and Machine Learning communities. First, download this podcast episode, in which the Knowledge Project interviews Prof. Domingos, who wrote the paper we read earlier. For now, the best StackExchange site is stats.stackexchange.com. There are also many relevant discussions on Quora, for example: "What is the difference between Data Analytics, Data Analysis, Data Mining, Data Science, Machine Learning, and Big Data?"
Recent advances in digital technology have produced prototype artificial neurons and light-based neural networks, but we are still discovering how the brain actually works. Researchers at MIT have built a computational model that could explain how inhibitory neurons efficiently block other neurons from firing. The team's model, as described in their paper, applies theoretical computer science to a "winner-take-all" operation. Nancy Lynch, NEC Professor of Software Science and Engineering at MIT, led the team, which will present its results at this week's Innovations in Theoretical Computer Science conference.
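To make the idea concrete, here is a toy NumPy sketch of a winner-take-all circuit — not the MIT model itself, just the textbook intuition: a single shared inhibitory signal grows with total activity and suppresses every excitatory unit equally, so only the most strongly driven unit survives:

```python
# Toy winner-take-all dynamics with one shared inhibitory neuron.
# This is an illustrative sketch, not the model from the MIT paper.
import numpy as np

def winner_take_all(inputs, steps=50, inhibition=0.9, lr=0.1):
    drive = np.asarray(inputs, dtype=float)  # external input to each excitatory unit
    x = drive.copy()                         # excitatory activity levels
    for _ in range(steps):
        inhib = inhibition * x.sum()         # inhibitory neuron fires with total activity
        # each unit: own drive minus the shared inhibition, clamped at zero
        x = np.maximum(x + lr * (drive - inhib), 0.0)
    return x

out = winner_take_all([1.0, 3.0, 2.0])
print(np.argmax(out))  # the unit with the strongest input (index 1) wins
```

The appeal of the inhibitory-neuron arrangement is its efficiency: one inhibitory unit silences all competitors at once, instead of every unit needing a suppressive connection to every other.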
The team of academics, led by Professor Nello Cristianini, collaborated closely with the company findmypast, which is digitising historical newspapers from the British Library as part of its British Newspaper Archive project. Over 35 million articles and 28.6 billion words, around 14 per cent of local newspapers from 1800-1950, were used for the study, which aimed to establish whether major historical and cultural changes could be detected from statistical footprints in the content of the local papers. Cristianini, a professor of AI in the department of engineering and mathematics at Bristol, said the study aimed to "demonstrate an approach to understanding continuity and change in history, based on the distant reading of a vast body of news, which complements what is traditionally done by historians." Tom Lansdall-Welfare, research associate in machine learning in the department of computer science, who led the computational part of the study, said: "We have demonstrated that computational approaches can establish meaningful relationships between a given signal in large-scale textual corpora and verifiable historical moments."
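The "statistical footprint" idea boils down to tracking how often a term appears per year across a dated corpus. Here is a toy sketch; the four-article corpus and the keyword are invented for illustration, whereas the actual study worked over roughly 35 million articles:

```python
# Toy "distant reading": relative frequency of one term per year.
# The corpus below is invented; a rising series hints at a cultural shift.
from collections import Counter

articles = [
    (1837, "the new railway opened between the towns"),
    (1838, "railway traffic and trade are growing"),
    (1839, "harvest news and market prices"),
    (1840, "another railway line and railway station announced"),
]

term_counts = Counter()
word_counts = Counter()
for year, text in articles:
    words = text.split()
    word_counts[year] += len(words)
    term_counts[year] += words.count("railway")

for year in sorted(word_counts):
    print(year, term_counts[year] / word_counts[year])
```

At the scale of billions of words, the same per-year relative frequencies become stable enough to line up against verifiable historical moments.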
In "Brains Vs. Artificial Intelligence: Upping the Ante," beginning Jan. 11 at Rivers Casino, poker pros will play a collective 120,000 hands of Heads-Up No-Limit Texas Hold'em over 20 days against a CMU computer program called Libratus. "Since the earliest days of AI research, beating top human players has been a powerful measure of progress in the field," said Tuomas Sandholm, professor of computer science. "We were thrilled to host the first Brains Vs. AI competition with Carnegie Mellon's School of Computer Science at Rivers Casino, and we are looking forward to the rematch," said Craig Clark, general manager of Rivers Casino.
Even if it may be hidden behind polished marketing speak pushed by major vendors and research firms (e.g., "Cognitive Computing", "Machine Intelligence", or even the doomsday-like "Smart Machines"), the Machine Learning genie is without a doubt out of the bottle, as its wide-ranging potential across the enterprise has already made it part of the business lexicon. On a slightly more positive note, a small subset of the VC community seems to be waking up to the huge platform opportunity Machine Learning presents. The rest will keep investing in algorithm-based startups with marketable academic resumes while perpetuating myths and creating further confusion: portraying Machine Learning as synonymous with Deep Learning, or completely misrepresenting the differences between Machine Learning algorithms and machine-learned models, and between model training and predicting from trained models. Legacy-company executives who opt for expensive help from consulting companies in forming a top-down analytics strategy, and/or in making complex "Big Data" technology components work together, before doing their homework on low-hanging predictive use cases will find that actionable insights and game-changing ROI are hard to show.
After weeks of playing around, I have a good grasp of what it takes to separate videos into individual images, apply the Algorithmia machine learning filters, and reassemble them into videos. I have also created several of my own texture filters using the AWS AMI and process provided by Algorithmia; you can learn more about algorithmic rotoscope, and the details of what I did, via the GitHub project updates. The pipeline uses the Algorithmia API, but I also built the video separation, image filtering, and video reassembly as an API of my own. Next, I am going to write up Algorithmia's business model, using my algorithmic rotoscope work as a hypothetical API-driven business, helping me think through the economics of building a SaaS or retail API solution on top of Algorithmia.
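For anyone curious what that pipeline looks like, here is a rough sketch of the three stages. ffmpeg handles the video steps, and `apply_filter` is a stand-in for the per-image machine learning call; the paths, style name, and frame-naming pattern are my assumptions, not taken from the project:

```python
# Sketch of the rotoscope pipeline: split video -> filter frames -> rejoin.
# ffmpeg does the video work; apply_filter is a placeholder for the
# per-image ML filter call. All names and paths here are hypothetical.
import subprocess

def split_cmd(video, frames_dir):
    # one PNG per frame: frames/frame_0001.png, frame_0002.png, ...
    return ["ffmpeg", "-i", video, f"{frames_dir}/frame_%04d.png"]

def join_cmd(frames_dir, output, fps=24):
    return ["ffmpeg", "-framerate", str(fps),
            "-i", f"{frames_dir}/frame_%04d.png", output]

def apply_filter(frame_path, style="toon"):
    # placeholder for the remote texture-filter call; here it only
    # reports what it would do
    return f"filtered {frame_path} with {style}"

def rotoscope(video, frames_dir, output, run=subprocess.run):
    run(split_cmd(video, frames_dir), check=True)
    # ... call apply_filter on every PNG in frames_dir here ...
    run(join_cmd(frames_dir, output), check=True)

print(split_cmd("clip.mp4", "frames"))
```

Wrapping those three stages behind a single endpoint is essentially what turning this into an API-driven business would mean: callers submit a video and a style, and the split/filter/join choreography stays hidden.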
Let's take the work Google DeepMind is undertaking with University College London Hospitals NHS Foundation Trust. By applying its DeepMind artificial intelligence to the CT and MRI scans of 700 former cancer patients, Google's technology will quickly distinguish healthy from cancerous tissue. This is where Deep Learning and Machine Learning (intrinsic parts of Data Science) come into play. Anonymised patient records, with all personal details removed, along with past and present treatment assessments, enable us to analyse and improve our understanding of future treatments.
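The anonymisation step can be sketched very simply; the record fields and identifier list below are hypothetical, and real clinical de-identification follows far stricter rules than dropping a few fields:

```python
# Minimal sketch of anonymising a patient record before analysis.
# Field names are hypothetical; real de-identification is much stricter.
IDENTIFIERS = {"name", "nhs_number", "address", "date_of_birth"}

def anonymise(record):
    # keep only the non-identifying clinical fields
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "nhs_number": "123 456 7890",
    "scan_type": "MRI",
    "treatment": "radiotherapy",
}
print(anonymise(record))
```

Only after records pass through a step like this can they be pooled and fed to the learning systems that compare treatments across many patients.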