If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Imagine a machine learning algorithm is tasked with identifying the number of bananas within a bowl of fruit. In total, the bowl contains 10 pieces of fruit: 4 bananas and 6 apples. The algorithm determines that there are 5 bananas and 5 apples. The bananas that were counted correctly are known as true positives, while the items that were incorrectly identified as bananas are called false positives. In this example, there are 4 true positives and 1 false positive, making the algorithm's precision 4/5, and its recall 4/4, since all 4 actual bananas were found.
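The banana example can be sketched in a few lines of Python. The counts below come straight from the scenario above; the variable names are just illustrative.

```python
# Precision and recall for the banana-counting example.
# The bowl holds 4 bananas; the algorithm labels 5 items as bananas, 4 correctly.
true_positives = 4   # bananas correctly identified as bananas
false_positives = 1  # apples mistakenly labelled as bananas
false_negatives = 0  # bananas the algorithm missed

# Precision: of the items labelled "banana", how many really were bananas?
precision = true_positives / (true_positives + false_positives)

# Recall: of the actual bananas, how many did the algorithm find?
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision}")  # 0.8
print(f"recall = {recall}")        # 1.0
```

Note how the two metrics diverge: the algorithm found every banana (perfect recall), but one in five of its "banana" labels was wrong (precision 0.8).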
This October, join us for a lively discussion as project leads and contributors from across the world share their work on exciting projects that are using cutting-edge AI and machine learning technology to drive forward open science and research communication. It's been four months since the last call on tools for reproducible research, which covered among others the Reproducible Document Stack project for authoring and publishing reproducible articles at scale; Binder, a community-driven tool for turning reproducible notebooks into executable environments; and Idyll, a markup language for the authoring and publishing of interactive narratives. This upcoming call will focus on tools and platforms that make use of recent advances in AI and machine learning. Amongst the updates, we will hear more about PeerTax, an investigation into peer review report taxonomy using Natural Language Processing (NLP); Open Knowledge Maps, a visualisation tool that provides instant topical overviews; SeerSuite, a framework for scientific digital libraries and search engines built by crawling scientific documents from the web; and InstruMinetal, a project started at the eLife Innovation Sprint to automatically extract equipment data from the literature. Whether you have developments to share, or simply would like to listen in to hear what's new, please register here.
The MLOps Conference took place earlier this week at Hudson Mercantile in New York City. Experts from the New York Times, Twitter, Netflix and Iguazio, the host company, spoke about best practices and machine learning implementation throughout a variety of different organizations. I learned of the technological void that exists when data scientists want to implement machine learning. With this new context in mind, I can approach conversations with our data team from a new perspective, and take the time to understand how we can implement new models on our team. Machine learning as a technology has been around for more than 50 years, beginning with Arthur Samuel's pioneering work at IBM where his program helped the computer improve with each game of checkers it played in 1952.
Last March, Google took the wraps off of Coral, a collection of hardware development kits and accessories intended to bolster the development of machine learning models at the edge. It launched in select regions in beta, but the tech giant today announced that it's graduating to a "wider" and global release. All Coral products -- including the $150 Coral Dev Board, the $74.99 Coral USB Accelerator, and the $24.99 5-megapixel camera accessory -- are available for sale at electronics retailer Mouser and for large-volume sale through Google's sales team. The company says that by the end of the year, it'll expand distribution into new markets including Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana, and the Philippines. Coinciding with Coral's general availability, the Coral website -- which now lives at Coral.ai -- has been revamped with better organization for docs and tools, testimonials, and "industry-focused" pages.
In this post, we will share with you the 11 most recommended books in computer vision, divided into 5 theoretical and 6 practical books. Note: these are not in any particular order. Most of the books listed here contain a lot of theoretical concepts, focusing on the mathematics behind computer vision. If you are getting into computer vision, it is recommended to build the theoretical knowledge before jumping right into the practical part.
"Any AI smart enough to pass a Turing test is smart enough to know to fail it." Suppose you are working on a high-impact yet challenging problem of malware classification. You have a large dataset at your disposal and are able to train a machine learning classifier with an accuracy of 98%. Barely suppressing your excitement, you convince the team to deploy the model — after all, who could resist a model with such amazing performance? Quite disappointingly, the model fails to detect threats in the real world.
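One common reason for this kind of failure is class imbalance: when malicious samples are rare, raw accuracy rewards a model for ignoring them. The sketch below uses hypothetical numbers (a 2% malware rate) to show how a degenerate classifier that labels everything "benign" still reaches 98% accuracy while catching zero threats.

```python
# Hypothetical imbalanced malware dataset: 2% malicious, 98% benign.
labels = [1] * 20 + [0] * 980   # 1 = malware, 0 = benign (1000 samples)
predictions = [0] * 1000        # degenerate classifier: always predicts "benign"

# Accuracy looks excellent...
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)

# ...but recall on the malware class is zero: no threat is ever flagged.
detected = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = detected / sum(labels)

print(f"accuracy = {accuracy:.2%}")  # 98.00%
print(f"recall   = {recall:.2%}")    # 0.00%
```

This is why metrics like precision, recall, or area under the precision-recall curve are better suited than accuracy for rare-event detection tasks such as malware classification.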
To highlight these largely unsung benefits, gaming PC maker HP Omen and creative agency Wieden Kennedy Shanghai created a demonstration using a tool called The Gamewaves Scanner at ChinaJoy, China's largest digital entertainment expo. The exhibit used cutting-edge technology to measure brain activity in real time, showcasing how people respond to moments that require teamwork, responsiveness, mental stamina, focus and memory. The experience was titled Achieving Gamefulness, and it was the start of an integrated campaign set to launch across China. To synthesize their findings for a wider audience, HP Omen and W K Shanghai created a trio of 30-second shorts: "Achieve Mental Stamina," "Achieve Teamwork" and "Achieve Focus." The arc features a troop of gamers being led by supreme humans called The Masters as they hope to achieve the highest level of enlightenment.
DUBLIN--(BUSINESS WIRE)--Public-service executives in Europe are optimistic and enthusiastic about the impact of artificial intelligence (AI) on government operations and services but face challenges implementing the technology, according to a study issued today by Accenture (NYSE: ACN). The study -- based on a survey of 300 government leaders and senior information technology (IT) decision-makers in Finland, France, Germany, Norway and the U.K. -- found that the vast majority (90%) of respondents believe that AI will have a high impact on their organizations over the coming years. In addition, nearly the same number (86%) said that their organization plans to increase its spending on AI next year. Customer service and fraud & risk management are the two operational areas favored most for public service AI deployments, cited by 25% and 23% of respondents, respectively. In addition, respondents most often cited increased efficiencies, cost or time savings, and enhanced productivity as the greatest anticipated benefits from their AI investments.
A couple of years ago I wrote about Google's AI camera, Google Clips ($250). This is a device that you plonk on the kitchen table (or hang around your neck), and which automatically takes photos whenever your favourite landscape, child or pet steps into frame. I compared it with a Nikon DSLR ($3,300) and proclaimed that AI devices would consume the lower end of the market while creative photographers would cling on to their interchangeable lenses. On reflection I'm not surprised by the news this week that Google Clips has been withdrawn. As we all know, AI relies heavily on machine learning, which requires huge volumes of data and experiments to accurately predict everything from eye disease to your next favourite artist on Spotify.