Artificial intelligence is widely considered one of the most revolutionary developments in the history of technology. Within just a few years, the world has witnessed its transformative capabilities: AI already drives innovation and powers some of the most cutting-edge everyday solutions. At the same time, a captivating conversation is under way about the future of artificial intelligence and what it will, or should, mean for humanity. The world's leading experts disagree on stirring questions such as AI's future impact on the job market; whether human-level AI will be developed, and if so, whether it would lead to an intelligence explosion; and whether we should welcome or fear these advances.
Opening up the "black box" helps remove uncertainty about AI outcomes, providing insight into the modeling process and revealing biases and errors. Artificial intelligence (AI) features more and more in our daily lives, with systems such as Siri and Alexa now commonplace in many households. Many homes themselves are "smart," powered by devices that control the lights, heating and air conditioning, and even the music that is playing, and those music players use AI to recommend songs and artists you may like. However, such systems are often described as "black boxes" because we cannot see how the data is processed: how are users to know why the model made a given prediction?
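One simple way to start opening the black box is permutation importance: treat the model as an opaque function and measure how much its accuracy drops when each input feature is shuffled. The sketch below is purely illustrative, with a toy stand-in model and made-up data rather than any real system's API:

```python
# Minimal sketch of permutation importance on a black-box predictor.
# The model, features, and labels below are all hypothetical.
import random

def toy_model(features):
    # Stands in for an opaque predictor: it secretly relies only on
    # feature 0 and ignores feature 1 entirely.
    return 1 if features[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=50, seed=0):
    """Mean accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for row, v in zip(X_perm, col):
            row[feature_idx] = v
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

for i in range(2):
    print(f"feature {i}: mean accuracy drop {permutation_importance(toy_model, X, y, i):.2f}")
```

Shuffling feature 0 hurts accuracy while shuffling feature 1 does not, which reveals, without inspecting the model's internals, which input actually drives its predictions.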
The field of Artificial Intelligence (AI) is no stranger to prophecy. At the Ford Distinguished Lectures in 1960, the economist Herbert Simon declared that within 20 years machines would be capable of performing any task achievable by humans. In 1961, Claude Shannon, the founder of information theory, predicted that science-fiction-style robots would emerge within 15 years. The statistician I. J. Good conceived of a runaway "intelligence explosion," a process whereby smarter-than-human machines iteratively improve their own intelligence. Writing in 1965, Good predicted that the explosion would arrive before the end of the twentieth century.
The European Commission's (EC) proposed Artificial Intelligence (AI) regulation, a much-awaited piece of legislation, is out. While the text must still go through consultations within the EU before its adoption, the proposal already gives a good sense of how the EU envisions the development of AI in the years to come: through a risk-based approach to regulation. Other use cases, such as facial recognition technology (FRT) for authentication, are not on the list of high-risk applications and should therefore face a lighter level of regulation. While technology providers must maintain the highest level of performance and accuracy in their systems, this necessary step is not the most critical one for preventing harm. The EC does not specify any accuracy threshold to meet; rather, it requires a robust and documented risk-mitigation process designed to prevent harm.
Today, AI is being adopted in everyday life, and it is more important than ever to ensure that decisions made with AI do not reflect discriminatory behavior toward particular populations. Fairness must be taken into consideration when consuming the output of an AI system. A quote from The Guardian summarizes it well: "Although neural networks might be said to write their own programs, they do so towards goals set by humans, using data collected for human purposes. If the data is skewed, even by accident, the computers will amplify injustice." Discrimination against a sub-population can be created unintentionally and unknowingly, but before deploying any AI solution, a check for bias is imperative.
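One common starting point for such a check is demographic parity: comparing the model's positive-prediction rate across sub-populations. The sketch below uses made-up predictions for two hypothetical groups; in practice the predictions and group labels would come from your own model and dataset:

```python
# Minimal sketch of a pre-deployment bias check via demographic parity.
# All predictions and group names below are illustrative.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {g: positive_rate(p) for g, p in preds_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (e.g. loan approvals) for two sub-populations:
predictions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],
}

gap, rates = demographic_parity_gap(predictions)
print(rates)                                 # per-group positive rates
print(f"demographic parity gap: {gap:.3f}")  # 0.375
```

A gap this large (37.5 percentage points) would not prove discrimination on its own, but it is exactly the kind of signal that should trigger investigation before deployment.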
AI models are only as good as the algorithms and data they are trained on. When an AI system fails, it is usually due to one of three factors: 1) the algorithm has been incorrectly trained, 2) there is bias in the system's training data, or 3) there is developer bias in the model-building process. The focus of this article is on bias in training data and bias coded directly into AI systems by model developers. "I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven't necessarily done so yet."
The Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series was held virtually from March 22-24, 2021. There were ten symposia in the program: Applied AI in Healthcare: Safety, Community, and the Environment; Artificial Intelligence for K-12 Education; Artificial Intelligence for Synthetic Biology; Challenges and Opportunities for Multi-Agent Reinforcement Learning; Combining Machine Learning and Knowledge Engineering; Combining Machine Learning with Physical Sciences; Implementing AI Ethics; Leveraging Systems Engineering to Realize Synergistic AI/Machine-Learning Capabilities; Machine Learning for Mobile Robot Navigation in the Wild; and Survival Prediction: Algorithms, Challenges and Applications. This report contains summaries of all the symposia.

The two-day international virtual symposium included invited speakers, research paper presentations, and breakout discussions with attendees from around the world. Registrants came from many countries and cities, including the US, Canada, Melbourne, Paris, Berlin, Lisbon, Beijing, Central America, Amsterdam, and Switzerland. We had active discussions about solving health-related, real-world issues in various emerging, ongoing, and underrepresented areas using innovative technologies, including artificial intelligence and robotics. We focused primarily on AI-assisted and robot-assisted healthcare, with specific attention to improving safety, the community, and the environment through the latest technological advances in our respective fields. The day was kicked off by Raj Puri, Physician and Director of Strategic Health Initiatives & Innovation at Stanford University, who spoke about a novel, automated sentinel surveillance system his team built to mitigate COVID, and its integration into their public-facing dashboard of clinical data and metrics.
Selected paper presentations during both days were wide-ranging. They included a talk by Oliver Bendel, a professor from Switzerland, and his Swiss colleague Alina Gasser on co-robots in care and support, providing the latest information on technologies relating to human-robot interaction and communication. Yizheng Zhao, Associate Professor at Nanjing University, and her colleagues from China discussed views of ontologies with applications to logical difference computation in the healthcare sector. Pooria Ghadiri of McGill University, Montreal, Canada, discussed his research on AI enhancements to healthcare delivery for adolescents with mental-health problems in primary care settings.
AI can transform sourcing and screening away from investors' pack mentality toward funding more female founders, who build better products and services and create higher returns for investors. Venture capitalists know that their advantage lies in identifying the most promising opportunities before their competitors do. This is confirmed by a University of Chicago study by Morten Sorensen, which shows that investors create 60% of their value at the top of the funnel, specifically in sourcing and screening. If so, sourcing and screening should be a constant target for improvement, right? No: apart from a few VCs who have reinforced their sourcing with web crawlers, sourcing and screening practices have remained essentially unchanged since the inception of the VC asset class around 1940.
Every industry is being transformed by artificial intelligence, owing to its sophisticated capabilities and thorough data analysis. AI can help organizations in a variety of ways. Because AI is a broad, general-purpose technology, its commercial applications are wide-ranging: AI can drive business process automation as well as synthesize the findings of data analysis. Many global corporations are leveraging AI to improve employee and customer engagement. This article discusses how businesses will use AI in the coming year.
The philosopher Mark Coeckelbergh has long been concerned with the development of intelligent machines and their effects on concepts of humanity, societal transformation, and the ideology of the trans- and posthuman. His recent book AI Ethics (MIT Press, 2020) provides a survey of the most pressing moral questions opened up by these developments. Should we simply enjoy the new liberties generated by AI as a future that offers no alternative? Where does selflessness end with respect to the machinic "other," and where should deliberations about a "trustworthy" AI begin? Questions like these are tackled by Coeckelbergh in the following interview.