PLEASE NOTE: RSVPing to this page DOES NOT grant you access to this meetup. Spaces are limited!

DESCRIPTION

How do we design AI systems that we trust? Algorithmic bias, algorithmic transparency, technological unemployment, data privacy, and algorithmic misinformation ("fake news") are just some of the issues facing the fair and ethical use of machine learning. For this DSAi special-edition Ethics & Interpretability event, held in collaboration with Microsoft, come along to learn from industry leaders how issues such as algorithmic bias might affect you and what is being done to address the ethical use of machine learning in 2019.

'Ethics for Artificial Intelligence'

In this 20-minute presentation, Aurelie will provide a formal introduction to what ethical and responsible AI is.
Facial recognition is becoming more pervasive in consumer products and law enforcement, backed by increasingly powerful machine-learning technology. But a test of commercial facial-analysis services from IBM and Microsoft raises concerns that the systems scrutinizing our features are significantly less accurate for people with darker skin.
As artificial intelligence and machine learning become the new industry norm, tech giants and service providers across the world are riding the emerging-tech wave, and with their ability to enhance services, AI and ML have become ubiquitous in technological advancement. As the world acknowledges the inevitability of AI and ML, however, the conversation has shifted to ethics, with governments and lawmakers introducing stringent policies on how the technology may be applied. In Europe, countries like the UK and France have put ethics at the core of AI while laying out stronger compliance rules for tech giants to adhere to. Taking note of these developments, tech giants like Google and Facebook, among many other companies, have published ethical policies governing the deployment of AI and ML within their organisations.
Google CEO Sundar Pichai brought good tidings to investors on parent company Alphabet's earnings call last week. Alphabet reported $39.3 billion in revenue last quarter, up 22 percent from a year earlier. Pichai gave some of the credit to Google's machine learning technology, saying it had figured out how to match ads more closely to what consumers wanted. One thing Pichai didn't mention: Alphabet is now cautioning investors that the same AI technology could create ethical and legal troubles for the company's business. The warning appeared for the first time in the "Risk Factors" segment of Alphabet's latest annual report, filed with the Securities and Exchange Commission the following day: "[N]ew products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results."