NEW DELHI: From diagnosing diseases to categorising huskies, Artificial Intelligence has countless uses, but mistrust in the technology and its solutions will persist until people, the "end users", can fully understand its processes, says a US-based Indian scientist. Overcoming the "lack of transparency" in the way AI processes information, popularly called the "black box problem", is crucial for people to develop trust in the technology, said Sambit Bhattacharya, who teaches Computer Science at Fayetteville State University. "Trust is a major issue with Artificial Intelligence because people are the end-users, and they can never have full trust in it if they do not know how AI processes information," Bhattacharya said. The computer scientist, whose work includes using machine learning (ML) and AI to process images, was a keynote speaker at the recent 4th International and 19th National Conference on Machines and Mechanisms (iNaCoMM 2019) at the Indian Institute of Technology in Mandi. To buttress his point that users do not always trust solutions provided by AI, Bhattacharya cited the instance of researchers at Mount Sinai Hospital in the US who applied ML to a large database of patient records containing information such as test results and doctor visits. The 'Deep Patient' software they used was exceptionally accurate at predicting disease, discovering patterns hidden in the hospital data that indicated when patients were on the way to various ailments, including cancer, according to a 2016 study published in the journal Nature.
In a computer science lab at Dublin City University (DCU), students are busy at work. Take a closer look and you will realise they are creating deepfakes. This is not a secret project, and they are not afraid of getting caught by their lecturer, because it is, in fact, a course assignment. Deepfakes of comedian Bill Hader morphing into Tom Cruise and Al Pacino, or of 'Mark Zuckerberg' boasting about how Facebook owns its users, demonstrate how easy it is to use machine learning techniques to create realistic fake footage of people doing and saying things they never have. The technology is getting better, and telling deepfakes from genuine footage is becoming increasingly difficult.
Machine learning is everywhere, but is it actual intelligence? A computer scientist wrestles with the ethical questions raised by the rise of AI. Published by Farrar, Straus and Giroux, October 15th 2019. The familiar idea is that unchecked robots will rise up and kill us all. But such martial forebodings overlook a perhaps more threatening model: Aladdin.
An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately. Prof Noel Sharkey, who is also a leading figure in a global campaign against "killer robots", said algorithms were so "infected with biases" that their decision-making processes could not be fair or trusted. A moratorium must be imposed on all "life-changing decision-making algorithms" in Britain, he said. Sharkey has suggested testing AI decision-making machines in the same way that new pharmaceutical drugs are rigorously checked before they are allowed on to the market. In an interview with the Guardian, the Sheffield University robotics/AI pioneer said he was deeply concerned over a series of examples of machine-learning systems being loaded with bias.
Alibaba Cloud (Alibaba) has released the source code of its Alink machine learning platform on GitHub. Developed by Alibaba, Alink offers a broad range of algorithm libraries that support both batch and stream processing, vital for machine learning tasks such as online product recommendation and intelligent customer service. According to Alibaba, Alink was developed on top of Flink, a unified distributed computing engine. With its seamless unification of batch and stream processing, Alibaba says Alink offers a more effective platform for developers to perform data analytics and machine learning tasks. The platform supports open-source data storage such as Kafka, HDFS and HBase, as well as Alibaba's proprietary data storage format.
Neural networks are often trained to fit the training data exactly. Such models would usually be considered over-fitted, and yet they manage to obtain high accuracy on test data. It is counter-intuitive, but it works. This has raised many eyebrows, especially regarding the mathematical foundations of machine learning and their relevance to practitioners. To address these contradictions, researchers at OpenAI, in their recent work, double down on the widely held belief that bigger is better.
For over a century, malignant brain tumors such as glioblastoma (GBM) have carried a dismal prognosis. The most recent substantial advance has been provided by surgical resection and chemoradiation followed by adjuvant temozolomide therapy. Yet a problem during the requisite post-treatment surveillance imaging is that the brain's reaction to heavy doses of radiation can mimic the appearance of true tumor progression on MRI (Figure 1).
The struggle is real, as they say, when it comes to getting machine learning into production. That was one of the big messages of 2019, as enterprises completed successful machine learning pilots but found it much more difficult to put their efforts into production, let alone scale them across the whole organization. Even though everyone seems to be working on it, machine learning deployed in production grew at a slower rate between 2018 and 2019, according to Gartner's annual CIO survey. Gartner VP analyst and fellow Rita Sallam forecasts that enterprises that experimented with open source technologies in their pilots will likely turn to commercial artificial intelligence and machine learning platforms to pull those open source efforts together into enterprise deployments. What's more, enterprises are likely to turn to the AI and ML platforms offered by public cloud providers such as Amazon AWS, Google, and Microsoft Azure.
American localization specialist Lionbridge Technologies has been employing machine translation tools for many years. Eventually, its customers started asking for multilingual training data. Today, Lionbridge has a separate division entirely dedicated to AI, doing everything from collecting chatbot training data to image annotation, audio transcription and even multilingual content moderation services. To find out more about the work of the division, AI Business talked to Aristotelis Kostopoulos, vice president of product solutions, artificial intelligence at Lionbridge. Q: The AI division at Lionbridge grew out of the machine translation business, but today it does so much more.
Responsible Operations is intended to help chart library community engagement with data science, machine learning, and artificial intelligence (AI). It was developed in partnership with an advisory group and a landscape group composed of more than 70 librarians and professionals from universities, libraries, museums, archives, and other organizations. This research agenda presents an interdependent set of technical, organizational, and social challenges to be addressed en route to library operationalization of data science, machine learning, and AI. Organizations can use Responsible Operations to make a case for addressing these challenges, and its recommendations provide an excellent starting place for discussion and action.