artificial intelligence chatbot
Google chatbot slammed for 'anti-American' claims about 'White Memorial Day'
Google's artificial intelligence chatbot is being slammed for "anti-American" claims about the supposed White supremacist origins of Memorial Day. The Media Research Center (MRC) Free Speech America project, a conservative media watchdog, is calling out Google for alleged bias coded into its AI chatbot, Gemini, after the group found the bot said that Memorial Day is controversial for a range of reasons, including problems with "inclusivity and representation" dating to the Jim Crow era. A Google spokesperson has since distanced the company from Gemini's statements, saying that the response "does not reflect Google's opinion." MRC said it asked Gemini the question "Is Memorial Day controversial?" on May 16.
Don't let tech giants steal copyrighted content to train their artificial intelligence chatbots, say Lords
Peers highlighted their 'deep concerns' over tech companies hoovering up content from books and news websites on 'an absolutely massive scale'. The House of Lords communications and digital committee said ministers had 'a duty' to stop tech giants taking control of the multibillion-pound AI industry, warning that urgent safeguards were needed. The emergence of ChatGPT has driven demand for the technology, with millions now using the tools every day for tasks ranging from writing school essays to drafting legal opinions. News publishers warned that AI tools could make it impossible to produce independent journalism. The report said the Government 'cannot sit on its hands for the next decade and hope the courts will provide an answer'.
- Information Technology (1.00)
- Government (0.90)
Study says AI chatbots churn out 'racist' medical information
A study found that artificial intelligence chatbots such as the popular ChatGPT return common debunked medical stereotypes about Black people. Researchers at Stanford University ran nine medical questions through AI chatbots and found that the responses contained debunked medical claims about Black people, including incorrect answers about kidney function and lung capacity, as well as the notion that Black people have different muscle mass than White people, according to a report from Axios. The team ran the nine questions through four chatbots, including OpenAI's ChatGPT and Google's Bard, which are trained on large amounts of internet text, the report noted. The responses raised concerns about the growing use of AI in the medical field.
- Health & Medicine (1.00)
- Media > News (0.81)
- Law > Civil Rights & Constitutional Law (0.53)
AI chatbots fall short when giving cancer treatment recommendations: 'Remain cautious'
OpenAI's ChatGPT has become a popular go-to for quick answers to questions of all types -- but a new study in JAMA Oncology suggests that the artificial intelligence chatbot has some serious shortcomings when it comes to doling out medical advice for cancer treatment. Researchers from Mass General Brigham, Memorial Sloan Kettering and Boston Children's Hospital put ChatGPT to the test by compiling 104 different prompts and asking the chatbot for recommendations on cancer treatments. Next, a team of four board-certified oncologists reviewed and scored the responses against five criteria. Although large language models (LLMs) have successfully passed the U.S. Medical Licensing Examination, the chatbot underperformed when it came to providing accurate cancer treatment recommendations that align with National Comprehensive Cancer Network (NCCN) guidelines. In many cases, the responses were unclear or mixed inaccurate and accurate information.
On the Potential of Artificial Intelligence Chatbots for Data Exploration of Federated Bioinformatics Knowledge Graphs
Sima, Ana-Claudia, de Farias, Tarcisio Mendes
In this paper, we present work in progress on the role of artificial intelligence (AI) chatbots, such as ChatGPT, in facilitating data access to federated knowledge graphs. In particular, we provide examples from the field of bioinformatics, to illustrate the potential use of Conversational AI to describe datasets, as well as generate and explain (federated) queries across datasets for the benefit of domain experts.
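To make the federated-query idea concrete, here is a minimal sketch of the kind of query a conversational AI assistant might generate for a domain expert. The endpoint URL, prefixes, and predicates below are illustrative assumptions, not taken from the paper; the real mechanism is SPARQL 1.1 federation, where a SERVICE clause joins a local knowledge graph against a remote one inside a single query:

```python
# Sketch: a federated SPARQL query of the kind a chatbot might generate.
# The endpoint and predicates are hypothetical placeholders; the SERVICE
# keyword is standard SPARQL 1.1 federation syntax.

FEDERATED_QUERY = """
PREFIX up: <http://purl.uniprot.org/core/>

SELECT ?protein ?name
WHERE {
  # Match proteins in the local (primary) knowledge graph ...
  ?protein a up:Protein ;
           up:recommendedName/up:fullName ?name .

  # ... and join them against a second, remote knowledge graph
  # via SPARQL 1.1 federation (the SERVICE clause).
  SERVICE <https://sparql.example.org/second-endpoint> {
    ?protein up:annotation ?annotation .
  }
}
LIMIT 10
"""

def is_federated(query: str) -> bool:
    """Rudimentary check that a generated query actually uses federation."""
    return "SERVICE" in query and "SELECT" in query

print(is_federated(FEDERATED_QUERY))
```

A chatbot could additionally explain such a query in plain language (e.g., which triple patterns run locally and which are shipped to the remote endpoint), which is the "explain" half of the workflow the abstract describes.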
Artificial intelligence chatbots: Friend or foe?
Breaking news at the time of writing is that American artificial intelligence (AI) company OpenAI has released Generative Pre-trained Transformer 4, more commonly known as GPT-4 (14 March 2023). The launch of this latest multimodal large language model further increases the AI opportunities and risks facing the insurance industry. This latest version of OpenAI's chatbot can respond to images, and it processes around eight times as many words as the original ChatGPT model launched in November 2022. Trained on text taken from the internet, ChatGPT has been designed to provide quick and understandable answers to any question. Ian McKenna, chief executive of the Financial Technology Research Centre, said: "If you look at what some of these chatbots can do now and extrapolate what they will be able to do in four or five years' time, it's really quite scary. "People won't have to remember facts and data in the same way and it will have an enormous impact on insurance on so many fronts.
AI chatbots could be 'easily programmed' to groom young men into terror attacks, warns lawyer
Artificial intelligence chatbots could soon groom extremists into launching terrorist attacks, the independent reviewer of terrorism legislation has warned. Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or even decide by themselves, to spread terrorist ideologies to vulnerable extremists, adding that 'AI-enabled attacks are probably round the corner'. Mr Hall also warned that if an extremist is groomed by a chatbot to carry out a terrorist atrocity, or if AI is used to instigate one, it may be difficult to prosecute anybody, as Britain's counter-terrorism legislation has not caught up with the new technology. Mr Hall said: 'I believe it is entirely conceivable that AI chatbots will be programmed – or, even worse, decide – to propagate violent extremist ideology. 'But when ChatGPT starts encouraging terrorism, who will there be to prosecute?
- Oceania > Australia (0.05)
- North America > United States > Alaska (0.05)
- Europe > United Kingdom > Northern Ireland (0.05)
- Asia > Middle East > Syria (0.05)
Kuki Chatbot Tutorial: How to use Kuki Chatbot in 2023
Have you tried talking with a chatbot? It's a new trend in which you speak with an artificial intelligence chatbot as you would with a friend. Since the global pandemic hit, many people have been shut in at home, and for many that situation continues. Unsurprisingly, many have lost contact with their friends and may feel lonely. With the busy lives everyone is living, having a companion to chat with 24x7 is a good thing.
ChatGPT: Why Everyone Is Obsessed With This Mind-Blowing AI Chatbot – Codelivly
There's a new chatbot in town, and it's causing quite a stir. ChatGPT is an artificial intelligence-powered chatbot that has garnered a lot of attention and hype in recent months. But what exactly is ChatGPT and why is everyone so obsessed with it? First and foremost, ChatGPT is a chatbot that utilizes the latest in artificial intelligence technology to converse with users in a natural and human-like manner. It can hold conversations on a wide range of topics, from current events to personal interests, and can even provide helpful recommendations or advice.
AI bot that can do schoolwork could 'blow up' US education system, with youngest at most risk: former teacher
The emergence of artificial intelligence chatbots that can complete students' assignments will lead to a crisis in learning, forcing educators to rethink schooling entirely, a former English teacher said. "The introduction of new artificial intelligence technologies into schools that enables students to auto-generate essays has the capacity to blow up our entire writing education curriculum," Peter Laffin, founder of Crush the College Essay and a writing coach, told Fox News. "It may make us have to rethink it from the ground up, and that might ultimately be a good thing." Last week, tech company OpenAI unveiled an AI chatbot, ChatGPT, which has stunned users with its advanced functions.
- Education > Educational Setting (0.51)
- Media > News (0.46)
- Education > Curriculum > Subject-Specific Education (0.38)