If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Olist is the largest eCommerce website in Brazil. It connects small retailers from all over the country to sell directly to customers. The business has generously shared a large dataset containing 110k orders placed on its site from 2016 to 2018. The SQL-style relational database includes customers and their orders on the site, covering around 100k unique orders across 73 product categories. It also includes item prices, timestamps, reviews, and geolocation data associated with each order.
One of the most popular ways to build ensembles is to use the same algorithm multiple times but on different subsets of the training dataset. The techniques used for this are called bagging and pasting. The only difference between them lies in how the subsets are built: bagging allows a training instance to be sampled several times for the same predictor (sampling with replacement), while pasting does not (sampling without replacement). Once all the algorithms are trained, the ensemble makes a prediction by aggregating the predictions of all of them. For classification this is usually a hard-voting process, while for regression the average of the results is taken.
Even before countries began rolling out their vaccination campaigns, the announcements from Pfizer, Moderna and AstraZeneca had already proved to be fortifying shots. Stocks rallied and healthcare workers celebrated in the wake of the vaccine news late last year. But months on, that early euphoria has evaporated, replaced by uncertainty and debate over vaccine safety, possible side effects and varying degrees of citizen reluctance. Artificial intelligence (AI) researchers and health experts modeling COVID-19's spread have warned that for vaccines to be useful in curbing the pandemic, a significant percentage of the population must be vaccinated to reach herd immunity. But, as SMU's Vice Provost of Research Professor Archan Misra pointed out at an AI-centered panel discussion, held in conjunction with the SMU-Global Young Scientists Summit (GYSS) on 15 January 2021, from a purely self-interested point of view, each person would be best served if all the others got vaccinated and they themselves did not have to, because that would stop the spread of the virus without their having to take on the possible risks of side effects. To account for these considerations, Professor Misra explained, the most powerful AI-based epidemiology models actually need to incorporate concepts from the behavioral sciences and game theory.
The innovations we make today are going to impact posterity in many ways. But is it necessary to keep technology and innovations away from our young kids? In fact, these young minds can become creators and thinkers by learning various technologies and computational skills. Don't you think the youth and kids should be engaged more with coding, robotics, and AI? An ISTE article describes how Keri Gritt, technology coordinator at St. Stephen's and St. Agnes School in Virginia, teaches coding to kindergartners: "Before using programs or robots, she ties sequencing and commands to physical movement by having students follow a program listed with cards on a whiteboard, starting and stopping with begin and end commands. Students then write programs to guide a peer across the room, making turns and avoiding obstacles. They then move on to robots."
"The technologies necessary for large-scale surveillance are rapidly maturing, with techniques for image classification, face recognition, video analysis, and voice identification all seeing significant progress in 2020." The figure shows the progress in the top-1 accuracy of the ImageNet challenge, a benchmark for image classification.
This article was originally published by Industry Today on March 3, 2021, and is reproduced below in full with permission. With rapid changes, pressure to innovate, and acceleration of implementation of advanced technology across all stages of the supply chain over the past year, there are important intellectual property (IP) considerations that companies need to make to protect their inventions. Leading edge tech like Augmented and Virtual Reality, machine learning and Artificial Intelligence, and 3D printing have become integral to business success yet continue to cause confusion around how the technology should be patented. This article explores some of the nuances as they relate to the art of protecting the software that fuels the base technology of these advanced innovations and important considerations that need to be made in the current environment. Most machine learning (ML) and artificial intelligence (AI) innovations are generally based in computer software. While courts and the U.S. Patent and Trademark Office ("U.S. PTO") have established limits on the ability to patent computer software, it is still possible to obtain meaningful, broad, and valuable patent protection on computer software.
In 2020 alone, the ed-tech market raked in more than US$10 billion in venture capital investment globally, on the back of heavy adoption when schools and higher education centers shuttered because of the pandemic. Statistics, however, suggest that education is still grossly under-digitized, with less than 4% of global expenditure going to tech, presenting a serious challenge given the scale of what's to come. The knowledge economy and future skills require massive digital transformation, and, while Covid-19 has accelerated it, there is still far to go.
Advances in hardware and AI and cloud computing technologies have made edge computing cheaper and more useful than ever before. By embedding AI in edge devices, enterprises can infuse once "dumb" electronics with smart capabilities. Live-feed cameras can now identify human faces or scan license plates, for example, and vehicles can move autonomously. Gartner recently predicted that by 2025, enterprises will create and process about 75% of their data at the edge. Technology vendors appear to be moving quickly in that direction.
In this post, I will show you how easy it is to use other state-of-the-art algorithms with PyCaret thanks to tune-sklearn, a drop-in replacement for scikit-learn's model selection module with cutting-edge hyperparameter tuning techniques. I'll also report results from a series of benchmarks, showing how tune-sklearn is able to easily improve classification model performance. Hyperparameter optimization algorithms can vary greatly in efficiency. Random search has been a machine learning staple for good reason: it's easy to implement and understand, and it gives good results in reasonable time. However, as the name implies, it is completely random, so a lot of time can be spent evaluating bad configurations.
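To make the trade-off concrete, here is a minimal sketch of plain random search using scikit-learn's `RandomizedSearchCV`; tune-sklearn's `TuneSearchCV` exposes the same interface, which is what makes it a drop-in replacement. The estimator, dataset, and parameter ranges are illustrative assumptions, not benchmarks from the post:

```python
# Random hyperparameter search: sample n_iter configurations at random
# from the given distributions and cross-validate each one. Nothing
# learned from early evaluations guides later ones, which is the
# inefficiency the post is pointing at.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(3, 20),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=10,      # number of random configurations to try
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Smarter tuners (Bayesian optimization, early-stopping schedulers such as those tune-sklearn wraps) spend the same evaluation budget more efficiently by pruning or steering away from bad configurations.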
In this issue: we look at Neural Architecture Search (NAS) and how it relates to AutoML; we explain the research paper “A Survey on Neural Architecture Search” and how it helps in understanding NAS; and we discuss Uber’s Ludwig toolbox, which lowers the barrier to entry for developers by enabling ML models to be trained and tested without writing code.