If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Last week, Twitter shared research showing that the platform's algorithms amplify tweets from right-of-center politicians and news outlets at the expense of left-leaning sources. Rumman Chowdhury, the head of Twitter's machine learning, ethics, transparency, and accountability team, said in an interview with Protocol that while some of the behavior could be user-driven, the reason for the bias isn't entirely clear. "We can see that it is happening. We are not entirely sure why it is happening," Chowdhury said. "When algorithms get put out into the world, what happens when people interact with it -- we can't model for that. We can't model for how individuals or groups of people will use Twitter, what will happen in the world in a way that will impact how people use Twitter."
Scientists from Skoltech, Philips Research, and Goethe University Frankfurt have trained a neural network to detect anomalies in medical images to assist physicians in sifting through countless scans in search of pathologies. Reported in IEEE Access, the new method is adapted to the nature of medical imaging and is more successful in spotting abnormalities than general-purpose solutions. Image anomaly detection is a task that comes up in data analysis in many industries. Medical scans, however, pose a particular challenge. It is far easier for algorithms to find, say, a car with a flat tire or a broken windshield in a series of car pictures than to tell which X-rays show early signs of pathology in the lungs, such as the onset of COVID-19 pneumonia.
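As a concrete illustration of the reconstruction-error idea behind many image anomaly detectors, here is a minimal sketch using a classical PCA baseline on synthetic data. This is not the paper's neural method; the dimensions, subspace rank, and noise levels are all illustrative choices. The pattern is the same, though: fit a model of "normal" data, then flag inputs the model reconstructs poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" images: 64-pixel vectors lying near a 5-dimensional subspace.
basis = rng.normal(size=(5, 64))
train = rng.normal(size=(500, 5)) @ basis + 0.1 * rng.normal(size=(500, 64))

# Fit PCA by SVD on the centered training data.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:5]  # keep the top 5 principal directions

def anomaly_score(x):
    """Reconstruction error: distance from x to the learned normal subspace."""
    z = (x - mean) @ components.T   # project onto the subspace
    recon = z @ components + mean   # map back to pixel space
    return float(np.linalg.norm(x - recon))

normal_sample = rng.normal(size=5) @ basis + 0.1 * rng.normal(size=64)
anomalous_sample = rng.normal(size=64) * 3  # an off-subspace input
# anomaly_score(anomalous_sample) should clearly exceed anomaly_score(normal_sample)
```

A neural detector replaces the linear projection with a learned encoder-decoder, but the decision rule, scoring inputs by how badly the model of normality reconstructs them, is the same.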
Is it even valid to assume that robots will be evil in the future and would seek to control humans? It is more likely that the future will hold different types of intelligent robots with different allegiances, just like human beings. AI is already being developed in many countries and tech companies. Robots aligned with different human groups may therefore fight each other, but there is little reason to expect all robots to fight all humans. Nor is it inevitable that robots will fight at all.
In the near future, we should see the value of AI-generated NFTs expand beyond generative art into more generic NFT utility categories, providing a natural vehicle for leveraging the latest deep learning techniques. An example of this value proposition can be seen in digital artists like Refik Anadol, who are already experimenting with cutting-edge deep learning methods for the creation of NFTs. Anadol's studio has been a pioneer in using techniques such as GANs, and has even dabbled in quantum computing, training models on hundreds of millions of images and audio clips to create astonishing visuals. NFTs are one of the recent delivery mechanisms explored by Anadol.
The Department of Artificial Intelligence and Human Health's mission is to lead the artificial intelligence-driven transformation of health care through innovative research, apply that knowledge to treatment in hospital and clinical settings, and provide personalized care for each patient, which will expand Mount Sinai's impact on human health across the Health System and around the world. This effort will include creating a hub-and-satellite model to make new tools and techniques available to all Mount Sinai physicians and building an infrastructure for high-performance computing and data access to improve Mount Sinai's diagnostic and treatment capabilities. The Department of AI and Human Health is also launching a campaign to recruit talented researchers, scientists, physicians, and students in the field. MSDW data goes back to 2003, covering a variety of EMR and ancillary systems at The Mount Sinai Hospital and expanding to Mount Sinai Queens and, in recent years, Mount Sinai Morningside, Mount Sinai West, and Mount Sinai Brooklyn hospitals. The MSDW team offers a list of data services to access custom data sets, custom data marts, and de-identified data.
Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025. At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI.
AI is expanding in two key areas of human activity and market investment -- health and language. Picking up the conversation from where we left off last week, we discussed AI applications and research in those areas with AI investors and authors of the State of AI 2021 report, Nathan Benaich and Ian Hogarth. After releasing what probably was the most comprehensive report on the State of AI in 2020, Air Street Capital and RAAIS founder Nathan Benaich and AI angel investor and UCL IIPP visiting professor Ian Hogarth are back for more. Last week, we discussed AI's underpinning: Machine learning in production, MLOps, and data-centric AI. This week we elaborate on specific areas of applications, investment, and growth.
TL;DR: We introduce mlforecast, an open source framework from Nixtla that makes using machine learning models in time series forecasting tasks fast and easy. It allows you to focus on the model and features instead of implementation details. With mlforecast you can run experiments more easily, and it has built-in backtesting functionality to help you find the best-performing model. You can use mlforecast in your own infrastructure or use our fully hosted solution. Just send us a mail to firstname.lastname@example.org
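To show the kind of "implementation details" a framework like this handles for you, here is a hand-rolled sketch of the lag-feature-plus-regressor pattern underlying ML-based forecasting: build a design matrix of lagged values, fit a regressor, then forecast recursively by feeding each prediction back in as a lag. The lag choices, the toy series, and the plain least-squares fit are illustrative assumptions, not mlforecast's actual API.

```python
import numpy as np

# Toy series: linear trend plus weekly seasonality plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

LAGS = [1, 7, 14]  # illustrative lag features

def make_matrix(series, lags):
    """Build a design matrix of lagged values and the aligned target vector."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - l : len(series) - l] for l in lags])
    return X, series[max_lag:]

X, target = make_matrix(y, LAGS)
# Ordinary least squares with an intercept; any sklearn-style regressor
# could slot in here instead.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), target, rcond=None)

def predict(series, horizon):
    """Recursive multi-step forecast: each prediction becomes a future lag."""
    hist = list(series)
    out = []
    for _ in range(horizon):
        feats = np.array([1.0] + [hist[-l] for l in LAGS])
        yhat = float(feats @ coef)
        out.append(yhat)
        hist.append(yhat)
    return out

forecast = predict(y, 14)  # two weeks ahead
```

The framework's value is automating exactly these chores (lag/date feature engineering, target alignment, the recursive prediction loop) across many series at once, so you only supply the model and the feature configuration.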
There appears to be common agreement that ethical concerns are of high importance when it comes to systems equipped with some form of AI. Demands for ethical AI are voiced from all directions. In response, public bodies, governments, and universities have rushed in recent years to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not translate easily into actionable advice for practitioners. Hence, companies too are publishing their own ethical guidelines to guide their AI development.