Artificial intelligence (AI) has been making headlines lately, and with good reason. AI is rapidly becoming more sophisticated, opening up uses that can benefit businesses and individuals alike. But while the potential benefits are clear, there is also concern that these powerful technologies may be misused or abused by those who don't have our best interests at heart. Many governments around the world are therefore exploring ways to regulate AI technology in order to protect their citizens from the negative consequences its misuse could bring. Yet despite this growing focus on regulating AI for safety, one thing remains unclear: how effective will such regulation really be?
Remember when loads of academics were confidently predicting that technology, from robots to AI, was about to destroy all our jobs? We went into Covid with record employment; it was the pandemic, not the robots, that knocked a chunk of people out of the workforce. In fact, technology has done something almost worse: it has given academics a whole new job producing studies that show how easily technology sways us, even on important judgments, from hiring to court cases. Two such studies came across my desk last week, highlighting the danger. The first paper turns the tables on the trend for job applicants to be screened by algorithms.
The Department of Mechanical Engineering at the University of Maryland, College Park invites exceptionally qualified candidates to apply for tenure-track faculty positions, with a target start date of August 2023 or later. Priority will be given to candidates with expertise in Design and Industrial AI. Exceptional candidates with expertise outside these areas are also welcome to apply. Qualifications: Candidates for the rank of Assistant Professor should have received, or expect to receive, their PhD in Mechanical Engineering or a related discipline prior to employment. Additionally, candidates should be creative and adaptable, with high potential for both research and teaching.
MILAN/LONDON Feb 3 (Reuters) - Italy's Data Protection Agency said on Friday it was prohibiting artificial intelligence (AI) chatbot company Replika from using the personal data of Italian users, citing risks to minors and emotionally fragile people. Replika, a San Francisco startup launched in 2017, offers users customized avatars that talk and listen to them. It has led the way among English speakers, and is free to use, though it brings in around $2 million in monthly revenue from selling bonus features such as voice chats. The 'virtual friend' is marketed as being able to improve the emotional well-being of the user. But the Italian watchdog said that by intervening in the user's mood, it "may increase the risks for individuals still in a developmental stage or in a state of emotional fragility".
Artists, illustrators and photographers have often led the way in embracing new technology. The concerns that creators such as Harry Woodgate have about AI programs ('It's the opposite of art': why illustrators are furious about AI, 23 January) that "rely entirely on the pirated intellectual property of countless working artists, photographers, illustrators and other rights holders" must be heeded. The UK's £116bn cultural and creative industries have an opportunity to be world leaders in developing and sustaining talent in emerging technologies, but the government must ensure that artists' rights are protected.
An artificial intelligence tool called ChatGPT averaged a C-plus on exams at the University of Minnesota Law School, according to four law professors who gave it a try. The professors used ChatGPT to answer the exam questions and then blindly graded its answers alongside those of real students, report Reuters and Insider. The average C-plus grade was below that of the law students, who averaged a B-plus. And while ChatGPT earned passing grades, its performance was at or near the bottom of the class.
Technology has a key part to play in building a better global economy, and tech companies themselves are moving rapidly to become better citizens and change agents. Here's what we can learn. Everyone knows the basic ways to have a gentler environmental impact: recycle plastic and aluminum, walk or take public transport when you can, turn off the lights when you leave a room, unsubscribe from junk snail mail, and more. But however much you reduce, reuse, and recycle, when it comes to our tech devices there is still a lot we could do to be more sustainable.
Artificial intelligence (AI) is a hot topic these days, but it's not a perfect technology. Like almost anything else, AI has both advantages and downsides. What are the pros and cons of artificial intelligence? Here's what people bring up most often. On the plus side, for example, an AI tool might automatically recognize an incoming email as an invoice and route it to the proper person or department.
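To make the invoice-routing example concrete, here is a minimal sketch in Python of how such a tool might work, assuming a scikit-learn text classifier trained on a few toy emails. The training messages, category labels, routing addresses, and the route_email helper are all hypothetical illustrations, not a description of any particular product.

# A minimal sketch of invoice detection and routing, assuming a
# scikit-learn text classifier trained on a handful of labeled emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would learn from thousands of labeled emails.
emails = [
    "Invoice #4821 attached, payment due within 30 days",
    "Please find the attached invoice for January services",
    "Team lunch on Friday, let me know if you can make it",
    "Reminder: quarterly all-hands meeting tomorrow at 10am",
]
labels = ["invoice", "invoice", "other", "other"]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

# Hypothetical routing table mapping predicted categories to departments.
ROUTES = {
    "invoice": "accounts-payable@example.com",
    "other": "inbox@example.com",
}

def route_email(body: str) -> str:
    """Predict the category of an incoming email and return its destination."""
    category = classifier.predict([body])[0]
    return ROUTES[category]

print(route_email("Your invoice #7703 is attached; net 15 terms apply."))
# Expected to print accounts-payable@example.com, given enough training data.

With only four training examples the prediction is of course unreliable; the point of the sketch is simply the shape of the pipeline: classify the incoming text, then look up the destination for the predicted category.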