If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Emma Raducanu and Andy Murray are among the British players kicking off their Wimbledon campaigns on Centre Court today, marking the first day of this year's hotly anticipated tennis Championships. For any tennis player, a match on Centre Court is a highlight of the annual calendar – not only is it the biggest stage at the world's most prestigious tennis tournament, but it is also a chance to perform in front of distinguished guests, including the Royal family. However, whether it is the best place for homegrown tennis stars to secure a win is another matter. Researchers at IBM, the official technology partner of The Championships, have trawled through 21 years of data to find the Wimbledon court with the best British win percentage so far this century. The data covers all Gentlemen's and Ladies' singles matches on all Wimbledon courts going back to 2000, when IBM's records start, captured using its IBM Watson AI software.
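The court-by-court comparison IBM describes boils down to a grouped win-rate calculation. As a rough sketch of the idea in pandas (the column names and toy records below are invented for illustration, not IBM's actual data):

```python
import pandas as pd

# Toy match records -- columns and values here are invented for
# illustration; IBM's real dataset covers every singles match since 2000.
matches = pd.DataFrame({
    "court": ["Centre Court", "Centre Court", "No.1 Court", "No.1 Court", "Court 12"],
    "british_player": [True, True, True, True, True],
    "british_won": [True, False, True, True, False],
})

# Win percentage for British players, grouped by court.
win_pct = (
    matches[matches["british_player"]]
    .groupby("court")["british_won"]
    .mean()
    .mul(100)
    .sort_values(ascending=False)
)
print(win_pct)
```

With richer real data, the same grouping would surface the court with the strongest British record.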
Many embedded systems challenges can't be resolved with rules-based programming. Increasingly, developers are turning to machine learning (ML) to tackle this issue. Elektor met up with Edge Impulse's Jan Jongboom, Amir Sherman, and Arun Rajasekaran at Embedded World 2022 to learn more about how the Edge Impulse ecosystem streamlines the process of using edge-based ML in products of all types. The discussion covers the implementation process from training-data evaluation to model creation; their partnerships with microcontroller vendors to support new products at launch; and the amazing tech solutions that Edge Impulse enables. And, to show what it can do, we took a look at a new Renesas MPU, the RZ/V2L, whose AI accelerator enables vision recognition at 150 frames per second.
Lexy Kassan is a thought leader for doing good things with data. She oversees and guides customers in transforming their data culture so they are best positioned to create value from their technology and analytics investments. She has seen first-hand how empowering people with the right tools and just enough governance can optimise processes and achieve business outcomes. Data has an increasing role to play in our lives and societies, and Lexy is actively involved in shaping that role. She is a thought leader in data and AI ethics, Founder and Host of the Data Science Ethics Podcast, and a frequent speaker and guest lecturer on the topic at institutions around the world.
Data anonymization is the process of mitigating direct and indirect privacy risks within data, such that there is a measurable way to ensure records cannot be attributed to a specific individual or entity. With an estimated 2.5 quintillion bytes of data generated every day and an increasing reliance on data to power new applications, machine learning models and AI technologies, implementing effective anonymization techniques and removing any bottlenecks is crucial to accelerating future developments and innovations. This post is a general introduction to anonymization and the tools and techniques for providing sufficient privacy protections, so that personally identifiable information (PII) is safe from exposure and exploitation. Data anonymization should be considered a continuous process: one that can require rapid iteration, applying various privacy engineering techniques and then measuring the privacy outcomes until a desired end state is reached. In the following sections, we'll dive deeper into our core tenets of the data anonymization process, and then walk through how you might apply them to a notional dataset.
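The iterate-and-measure loop described above can be sketched in a few lines. In this hypothetical example (the field names and techniques are assumptions for illustration, not from the post itself), we suppress a direct identifier, generalize two quasi-identifiers, and then measure the smallest group size (k-anonymity) to decide whether another iteration is needed:

```python
from collections import Counter

# Notional records -- the field names and values here are hypothetical.
records = [
    {"name": "Alice", "age": 34, "zip": "30301", "diagnosis": "flu"},
    {"name": "Bob",   "age": 36, "zip": "30305", "diagnosis": "cold"},
    {"name": "Carol", "age": 52, "zip": "30301", "diagnosis": "flu"},
    {"name": "Dan",   "age": 58, "zip": "30309", "diagnosis": "cold"},
]

def anonymize(record):
    """Apply two simple privacy engineering techniques to one record."""
    return {
        # Suppression: the direct identifier ("name") is dropped entirely.
        # Generalization: bucket age into decades, truncate ZIP to 3 digits.
        "age_band": f"{record['age'] // 10 * 10}s",
        "zip_prefix": record["zip"][:3],
        "diagnosis": record["diagnosis"],
    }

def measure_k(rows, quasi_identifiers):
    """k-anonymity: size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

anonymized = [anonymize(r) for r in records]
k = measure_k(anonymized, ["age_band", "zip_prefix"])
print(f"k = {k}")  # if k is below target, generalize further and re-measure
```

The point is the loop, not these particular techniques: apply a transformation, measure the privacy outcome, and repeat until the measured k (or whichever metric you choose) reaches the desired end state.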
MLOps is an abbreviation for Machine Learning Operations, a core component of Machine Learning engineering that focuses on optimising the process of deploying machine learning models and subsequently maintaining and monitoring them. MLOps is a collaborative function that frequently includes data scientists, DevOps engineers, and IT. MLOps allows for the automated testing of machine learning artefacts (e.g. data validation, model testing, and model integration testing).

Anish has been a Lead Data Science consultant for various Fortune 500 customers for many years and has helped over 2,000 professionals move into the Data Science profession. He holds an MSc in Data Science and is a technical writer for leading Data Science magazines.
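As a rough illustration of what automated artefact testing can look like in practice (the schema, column names, and accuracy threshold below are invented for this sketch, not prescribed by any particular MLOps tool), a data-validation and model-quality gate might be written as ordinary assertions run in CI before every deployment:

```python
# A minimal sketch of automated ML artefact testing: the field names,
# valid ranges, and the 0.9 accuracy threshold are illustrative assumptions.

def validate_training_data(rows):
    """Data validation: fail fast if incoming rows break the expected schema."""
    errors = []
    for i, row in enumerate(rows):
        age = row.get("age")
        if not isinstance(age, (int, float)) or not (0 <= age <= 120):
            errors.append(f"row {i}: bad age {age!r}")
        if row.get("label") not in {0, 1}:
            errors.append(f"row {i}: bad label {row.get('label')!r}")
    return errors

def passes_quality_gate(accuracy, threshold=0.9):
    """Model testing: gate deployment on a minimum held-out accuracy."""
    return accuracy >= threshold

sample = [{"age": 42, "label": 1}, {"age": -5, "label": 2}]
print(validate_training_data(sample))  # reports both problems in row 1
print(passes_quality_gate(0.93))       # the candidate model clears the gate
```

In a real pipeline these checks would run automatically on every new training set and candidate model, which is exactly the kind of repeatable testing MLOps is meant to provide.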