
Questionnaire & Opinion Survey


AI early adopters in the public sector

#artificialintelligence

As one of the hottest technologies of recent years, artificial intelligence (AI) has started penetrating both the US public and private sectors, though to differing degrees. While the private sector seems bullish on AI, the public sector's approach appears more cautious: a Deloitte survey of select early adopters of AI shows high concern about the potential risks of AI among public sector organizations (see the sidebar "About the survey"). The survey results give a peek into how public sector organizations are approaching AI, and how their approaches, in many cases, differ from those of their private sector counterparts. AI is not completely new to the public sector. The first AI contract was awarded in 1985 by the US Social Security Administration,1 but the technology was not yet advanced enough to become common in the decades that followed.


AI can detect how lonely you are by analysing your speech

#artificialintelligence

Artificial intelligence (AI) can detect loneliness with 94 per cent accuracy from a person's speech, a new scientific paper reports. Researchers in the US used several AI tools, including IBM Watson, to analyse transcripts of older adults interviewed about feelings of loneliness. By analysing words, phrases, and gaps of silence during the interviews, the AI assessed loneliness symptoms nearly as accurately as loneliness questionnaires completed by the participants themselves, which can be biased. It revealed that lonely individuals tend to have longer responses to direct questions about loneliness, and express more sadness in their answers. 'Most studies use either a direct question of "how often do you feel lonely", which can lead to biased responses due to stigma associated with loneliness,' said senior author Ellen Lee at UC San Diego (UCSD) School of Medicine.
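The kind of analysis described, extracting linguistic features such as response length, pauses, and emotionally loaded words from interview transcripts, can be sketched roughly as below. The feature set and the small sadness lexicon are illustrative assumptions for this sketch only; the actual study used IBM Watson's NLP tooling, not a hand-built word list.

```python
import re

# Illustrative sadness lexicon; an assumption for this sketch only.
SAD_WORDS = {"sad", "alone", "lonely", "empty", "miss"}

def transcript_features(turns):
    """Extract simple linguistic features from interview turns.

    `turns` is a list of (speaker, text) pairs; pauses are assumed
    to be transcribed as the literal token "[pause]".
    """
    responses = [text for speaker, text in turns if speaker == "participant"]
    pause_count = sum(text.count("[pause]") for text in responses)
    words = [
        w
        for text in responses
        for w in re.findall(r"[a-z']+", text.replace("[pause]", "").lower())
    ]
    return {
        "mean_response_len": len(words) / max(len(responses), 1),
        "sadness_ratio": sum(w in SAD_WORDS for w in words) / max(len(words), 1),
        "pause_count": pause_count,
    }

turns = [
    ("interviewer", "How often do you feel lonely?"),
    ("participant", "Well [pause] I feel alone most evenings, and I miss company."),
]
features = transcript_features(turns)
```

Features like these would then feed a classifier trained against the participants' questionnaire scores; the study reports roughly 94 per cent accuracy for its full pipeline.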


NLP in the Cloud Is Growing, But Obstacles Remain

#artificialintelligence

More than three-quarters of natural language processing (NLP) users utilize a cloud NLP service, according to the 2020 NLP Industry Survey. While cloud NLP workloads are on the rise, there are barriers to using the technology in the cloud, says Ben Lorica, one of the authors of the study. Overall, this is a great time to be using NLP technology to process and analyze text, Lorica and Paco Nathan write in the 2020 NLP Industry Survey, which was sponsored by John Snow Labs, developer of the open source Spark NLP library that's used in the healthcare field. For starters, the budgets for NLP use cases are expanding quite a bit. The capabilities, accuracy, and scalability of NLP models and services, most of which at this point are based on neural networks, have also gone up, says Lorica, who is the principal of Gradient Flow Research (which conducted the survey) and also the chair of the upcoming NLP Summit.


Major survey highlights Europeans' fears over AI – Government & civil service news

#artificialintelligence

Less than 20% of Europeans believe that current laws "efficiently regulate" artificial intelligence, and 56% have low trust in authorities to exert effective control over the technology, according to a new survey from the European Consumer Organisation (BEUC). The findings have important implications for the governance and design of AI-powered public services, emphasising the need to address citizens' fears over transparency, accountability, equity in decision-making, and the management of personal data. The BEUC surveyed 11,500 consumers in nine European countries: Belgium, Denmark, France, Germany, Italy, Poland, Portugal, Spain and Sweden. It found that while a large majority of respondents feel that artificial intelligence (AI) can be useful, most don't trust the technology and feel that current regulations do not protect them from the harms it can cause. It also found that 66% of respondents from Belgium, Italy, Portugal and Spain agree that AI can be hazardous and should be banned by authorities.


A Guide to Your Future Data Scientist Salary - Dataconomy

#artificialintelligence

Given that the data analytics industry is young, it's not surprising that professionals change employers often, as anyone with experience quickly becomes far more valuable to companies and can reach manager status. Across all industries in Europe, 68% of people do not currently work in start-ups. Despite this, the data science market remains open-minded, with 83% saying they would consider joining one in the future. The respondents who currently work in start-ups come primarily from Technology/IT and Consulting (37%), and respondents from these two industries also make up the majority of those who would consider working for a start-up in the future (38%).


Six Steps to Bridge the Responsible AI Gap

#artificialintelligence

As artificial intelligence assumes a more central role in countless aspects of business and society, so has the need for ensuring its responsible use. AI has dramatically improved financial performance, employee experience, and product and service quality for millions of customers and citizens, but it has also inflicted harm. AI systems have offered lower credit card limits to women than men despite similar financial profiles. Digital ads have demonstrated racial bias in housing and mortgage offers. Users have tricked chatbots into making offensive and racist comments.


Big bad data: We don't trust AI to make good decisions

#artificialintelligence

The UK government's recent technological mishaps have seemingly left a bitter taste in the mouths of many British citizens. A new report from the British Computer Society (BCS), the Chartered Institute for IT, has revealed that more than half of UK adults (53%) don't trust organisations that use algorithms to make decisions about them. The survey, conducted with more than 2,000 respondents, comes in the wake of a tumultuous summer, shaken by student uproar after it emerged that the exam regulator Ofqual had used an unfair algorithm to predict A-level and GCSE results after the COVID-19 pandemic prevented exams from taking place. Ofqual's algorithm effectively based predictions on schools' previous performance, leading to significant downgrades in results that particularly affected state schools while favouring private schools. The government promptly backtracked and allowed students to adopt teacher-predicted grades rather than algorithm-based results.


The imperatives for automation success

#artificialintelligence

At a time when companies are increasingly embracing technologies such as robotic process automation, natural language processing, and artificial intelligence, and as companies' automation efforts mature, findings from our second McKinsey Global Survey on the topic show that the imperatives for automation success are shifting. The online survey was in the field from February 4 to February 14, 2020, and garnered responses from 1,179 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. To adjust for differences in response rates, the data are weighted by the contribution of each respondent's nation to global GDP. Two years ago, our survey found that making business-process automation a strategic priority was conducive to success beyond the piloting stage.2 We define business-process automation as the use of general-purpose technologies (for example, bots and algorithms) to perform work that was previously done manually, in order to improve the functionality of a company's underlying systems; the survey's definition excluded automation that was custom built for organizations (for example, Excel macros and custom scripts).
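The GDP weighting the survey describes can be illustrated with a toy calculation: each respondent's answer is scaled by their country's share of global GDP, with that share split across the country's respondents so that a nation's influence reflects its economy rather than its response rate. The shares and responses below are made-up numbers, not survey data.

```python
from collections import Counter

# Hypothetical GDP shares of three countries (fractions of global GDP).
gdp_share = {"US": 0.24, "DE": 0.04, "IN": 0.03}

# (country, answer) pairs: 1 = reports automation success, 0 = does not.
responses = [("US", 1), ("US", 0), ("DE", 1), ("IN", 1)]

# Split each country's GDP weight evenly across its respondents so that
# a country's influence reflects its economy, not its response rate.
n_by_country = Counter(country for country, _ in responses)
weights = [gdp_share[c] / n_by_country[c] for c, _ in responses]

weighted_sum = sum(w * answer for w, (_, answer) in zip(weights, responses))
success_rate = weighted_sum / sum(weights)  # GDP-weighted success rate
```

Here the unweighted success rate would be 3/4, but the GDP weighting pulls it down to about 0.61, because the one US respondent reporting no success carries a large weight.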