If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
If you'll permit us to spoil a little bit of movie magic, many of the sound effects you hear in film and TV are actually recreated and edited in later by Foley artists. Now, researchers are attempting to create sound-effect-generating artificial intelligence to see if it can do their jobs well enough to fool the general population. In a recent study, a small cohort of participants fell for the trick: most of them believed that the AI-generated noises were real, IEEE Spectrum reports. Sometimes, they even chose the AI version over a video's original audio. In the study, which was published in June in IEEE Transactions on Multimedia, 41 of the 53 participants were fooled by the AI-generated sounds.
The Stanford Center for Health Education has launched an online program on Artificial Intelligence in Healthcare. Designed for technology professionals, computer scientists, and healthcare providers, the program aims to advance the delivery of patient care and improve global health outcomes through artificial intelligence and machine learning. The online program will be taught by faculty from Stanford Medicine. The program's goal is to foster a common understanding of the potential for AI to safely and ethically improve patient care. "Effective use of AI in healthcare requires knowing more than just the algorithms and how they work," says Nigam Shah, associate professor of medicine and biomedical data science and faculty director of the new program.
IBM says it has made progress toward developing ways to estimate the severity of Parkinson's symptoms by analyzing physical activity as motor impairment increases. In a paper published in the journal Nature Scientific Reports, scientists at IBM Research, Pfizer, the Spivack Center for Clinical and Translational Neuroscience, and Tufts created statistical representations of patients' movement that could be evaluated using AI either in-clinic or from a more natural setting, such as a patient's home. And at the 2020 Machine Learning for Healthcare Conference (MLHC), IBM and the Michael J. Fox Foundation intend to detail a disease progression model that pinpoints how far a person's Parkinson's has advanced. The human motor system relies on a series of discrete movements, like arm swinging while walking, running, or jogging, to perform tasks. These movements and the transitions linking them create patterns of activity that can be measured and analyzed for signs of Parkinson's, a disease that's anticipated to affect nearly 1 million people in the U.S. this year alone.
AI could also have a transformative effect on clinical decision-making through the utilisation of the huge volumes of genomic, biomarker, phenotype, behavioural, biographical and clinical data generated across the health system. Bayer and Merck & Co provide a perfect example of this. They have developed an AI software system to support clinical decision-making around chronic thromboembolic pulmonary hypertension (CTEPH) – a rare form of pulmonary hypertension. The software helps differentiate CTEPH patients from those whose similar symptoms are actually a result of asthma or chronic obstructive pulmonary disease (COPD), and therefore helps diagnose CTEPH more reliably and efficiently. The CTEPH Pattern Recognition Artificial Intelligence obtained FDA Breakthrough Device Designation in December 2018.
I work for Icon Solutions. We work in instant payments. What I want to talk about is applying machine learning to fraud detection. When we first started researching it, we found two themes going on. We found the hype-type things. I'm sure you've all seen this: when will we bow to our machine overlords? By 2025, robots will be playing symphonies and all that stuff. Then we found the other extreme as well, which was the fairly wacky math. What we were looking at is how we can actually apply this technology to our requirements and to those of our clients. I'm going to talk about payments. Then I'm going to do a demonstration. In terms of payments, the way it worked, if you wanted to interact with the bank through most of the 20th century, you had to go into a branch. That was the only way you could interact with the bank. If somebody wanted to steal money from a bank, they had to rob it. That was basically the only option they had, which is why you can see the big security barriers that they had in the branches at that point in time. Then, moving on to about the 1960s, banks started employing new technologies. They took things like the IBM 360 series, and they actually started using it. Even then it was pretty secure. The people who were using it were people who worked for the bank. It was a closed network. If you wanted to actually get into the systems, you had to go into the bank's offices, and you had to be an employee. The potential for fraud was fairly small.
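Before the demonstration, it's worth noting how modest a starting point fraud screening can have. As a toy illustration only (not Icon Solutions' system; the function, threshold, and amounts below are invented), one of the simplest screens is an outlier check on a customer's transaction amounts:

```python
# Toy sketch of outlier-based fraud screening. Real systems use far richer
# features (device, geography, transaction velocity) and trained models;
# the z-score threshold and data here are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transaction amounts whose z-score exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma > 0 and abs(a - mu) / sigma > z_threshold]

# A customer's typical spend, plus one outlier transfer.
history = [42.0, 18.5, 60.0, 25.0, 33.0, 48.0, 21.0, 55.0, 30.0, 9000.0]
flagged = flag_anomalies(history, z_threshold=2.0)  # flags the 9000.0 transfer
```

Machine-learning approaches generalise this idea: instead of one hand-picked threshold on one feature, a model learns which combinations of features mark a transaction as unusual.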
The creation of the Global Partnership on Artificial Intelligence (GPAI) reflects the growing interest of states in AI technologies. The initiative, which brings together 14 countries and the European Union, will help participants establish practical cooperation and formulate common approaches to the development and implementation of AI. At the same time, it is a symptom of the growing technological rivalry in the world, primarily between the United States and China. Russia's ability to interact with the GPAI may be limited for political reasons, but, from a practical point of view, cooperation would help the country implement its national AI strategy. The Global Partnership on Artificial Intelligence (GPAI) was officially launched on June 15, 2020, at the initiative of the G7 countries alongside Australia, India, Mexico, New Zealand, South Korea, Singapore, Slovenia and the European Union. According to the Joint Statement from the Founding Members, the GPAI is an "international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth."
From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications that AI will have in the next 15 years, starting with those we should worry the most about. By using fake audio and video to impersonate another person, the technology can cause various types of harm, said the researchers. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relatives over a video call. The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.
Voluntary employee turnover can have a direct financial impact on organisations. And, at a time when the pandemic outbreak has the majority of organisations looking to cut down their employee costs, voluntary employee turnover can create a big concern for companies. Thus, the ability to predict the turnover rate of employees can not only help in making informed hiring decisions but can also help avert substantial financial strain in this uncertain time. Acknowledging that, researchers and data scientists from PredictiveHire, an AI recruiting startup, built a language model that can analyse a candidate's open-ended interview responses to infer the likelihood of the candidate's job-hopping. The study -- led by Madhura Jayaratne and Buddhi Jayatilleke -- was done on the responses of 45,000 job applicants, who interviewed via a chatbot and also self-rated their likelihood of hopping jobs.
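As a heavily simplified sketch of the general idea (this is not PredictiveHire's model, which is a language model trained on the applicants' responses; the word lists and scoring rule below are invented for illustration), free-text answers can be scored for mobility-related language:

```python
# Toy illustration only: score interview answers for language plausibly
# associated with job mobility. The term lists and formula are invented;
# the published study used a trained language model, not keyword counts.
MOBILITY_TERMS = {"change", "new", "move", "opportunity", "growth", "bored"}
STABILITY_TERMS = {"stay", "commit", "long-term", "loyal", "settle"}

def job_hop_score(answer: str) -> float:
    """Return a score in [-1, 1]; positive leans toward job-hopping."""
    words = answer.lower().replace(",", " ").replace(".", " ").split()
    hops = sum(w in MOBILITY_TERMS for w in words)
    stays = sum(w in STABILITY_TERMS for w in words)
    total = hops + stays
    return 0.0 if total == 0 else (hops - stays) / total

score = job_hop_score("I get bored quickly and look for a new opportunity to move on.")
```

A trained model replaces the hand-picked word lists with representations learned from the 45,000 labelled responses, which is what lets it pick up signals no keyword list would capture.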
ITU is conducting a global ITU AI/ML 5G Challenge on the theme "How to apply ITU's ML architecture in 5G networks". If you don't know the difference between AI and ML, the picture in our earlier blog post may help. The ITU website says: Artificial Intelligence (AI) will be the dominant technology of the future and will impact every corner of society. In particular, AI/ML (machine learning) will shape how communication networks, a lifeline of our society, will be run. Many companies in the ICT sector are exploring how to make best use of AI/ML.
Participants were prompted to review a queue of news stories and share 12 true news stories with social media users. To encourage participants to review news articles and their explanations, users had to select at least one article that represents the news headline for each news story they chose to share. They could always skip to the next news story (as many times as needed) if they were not familiar with the topic. The choice of the sharing task and the ability to skip unfamiliar topics (unlike work that assumes participants are familiar with a short curated list of news stories, e.g., [horne2019rating, nguyen2018believe]) improves the fake news detection task by allowing participants to interact with and examine the AI/XAI assistant rather than focusing on news analysis. Participants also had the chance to flag news stories as fake if they found headlines to be fake; however, these were not counted toward the required number of shared stories needed for task completion.
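The task flow described above can be sketched as follows; the function, field names, and structure are our own assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of the sharing-task logic as described: sharing requires at
# least one selected article and counts toward the 12 required shares, while
# skipping and flagging do not. Names and structure are assumptions.
REQUIRED_SHARES = 12  # participants had to share 12 true news stories

def process_story(story, selected_articles, action, shared_count):
    """Apply one participant action ('share', 'skip', or 'flag') to a story."""
    if action == "skip":              # unfamiliar topics may always be skipped
        return shared_count
    if action == "flag":              # flagged-as-fake stories are recorded...
        story["flagged_fake"] = True  # ...but do not count toward completion
        return shared_count
    if action == "share":
        if not selected_articles:     # at least one article must be selected
            raise ValueError("select at least one article before sharing")
        return shared_count + 1
    raise ValueError(f"unknown action: {action}")

count = process_story({"headline": "..."}, ["article-1"], "share", 0)
```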