Elon Musk's xAI Might Be Hallucinating Its Chances Against ChatGPT
In April, Elon Musk told right-wing commentator Tucker Carlson that he was starting a project to compete with ChatGPT and build "a maximum truth-seeking AI that tries to understand the nature of the universe." Today, Musk unveiled that new artificial intelligence venture. The company's spare landing page repeats that goal of understanding the universe and lists 11 AI researchers, seemingly all men, who have made significant contributions to the field in recent years and have worked at companies including Google, DeepMind, and OpenAI. The crew is an "all-star founding team," according to Linxi "Jim" Fan, an AI researcher at Nvidia. "I'm really impressed by the talent density – read too many papers by them to count," he writes in a LinkedIn post.
Google Vice President Warns That AI Chatbots Are Hallucinating
Speaking to German newspaper Welt am Sonntag, Google vice president Prabhakar Raghavan warned that chatbots may deliver complete nonsense to users, despite their answers seeming coherent. Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions and queries. Named Bard, the chatbot will roll out to the public in the coming weeks, according to Google CEO Sundar Pichai. Ahead of the launch, Google demonstrated Bard's powers in a promo video. Unfortunately, people noticed that the chatbot – a scaled-down version of the company's Language Model for Dialogue Applications (LaMDA), which convinced one engineer it was sentient – came up with incorrect statements about the James Webb Space Telescope (JWST).
Why Did Meta Take Down Its 'Hallucinating' AI Model Galactica?
On Wednesday, Meta AI and Papers with Code announced the release of Galactica, an open-source large language model with 120 billion parameters, trained on scientific knowledge. However, just days after its launch, Meta took Galactica down. Interestingly, every result generated by Galactica came with the warning: "Outputs may be unreliable. Language Models are prone to hallucinate text." "Galactica is trained on a large and curated corpus of humanity's scientific knowledge. This includes over 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias and more," the paper said.
Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future
Byunggill Joe, Insik Shin, Jihun Hamm
Recurrent models are frequently used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for. Existing research is limited in generality, addressing only application-specific vulnerabilities or making implausible assumptions such as knowledge of future inputs. In this paper, we present a general attack framework for online tasks that incorporates the unique constraints of the online setting, which differ from those of offline tasks. Our framework is versatile in that it covers time-varying adversarial objectives and various optimization constraints, allowing for a comprehensive study of robustness. Using the framework, we also present a novel white-box attack called Predictive Attack that 'hallucinates' the future. On average, the attack achieves 98 percent of the performance of the ideal but infeasible clairvoyant attack. We validate the effectiveness of the proposed framework and attacks through various experiments.
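The core idea of a future-hallucinating attack can be illustrated in miniature: since an online attacker cannot see future inputs, it substitutes a guess for them (here, simply assuming the current input repeats) and optimizes the current perturbation against that guessed future. The following is a toy sketch under those assumptions; the model, the hallucination heuristic, and every function name (`rollout`, `predictive_attack`) are illustrative inventions, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent model: h_t = tanh(W h_{t-1} + U x_t); score = w . h_T
W = rng.normal(size=(4, 4)) * 0.3
U = rng.normal(size=(4, 2)) * 0.3
w = rng.normal(size=4)

def rollout(h, xs):
    """Run the recurrent model over a sequence and return the final score."""
    for x in xs:
        h = np.tanh(W @ h + U @ x)
    return w @ h

def predictive_attack(h, x_now, eps=0.5, horizon=3, steps=30, lr=0.2):
    """Perturb the current input to suppress the model's future score,
    'hallucinating' the future by assuming the input simply repeats."""
    future = [x_now] * horizon  # hallucinated future inputs

    def obj(d):
        return rollout(h, [x_now + d] + future)

    delta = np.zeros_like(x_now)
    best, best_val = delta.copy(), obj(delta)
    for _ in range(steps):
        # Finite-difference gradient of the horizon score w.r.t. delta
        grad = np.zeros_like(delta)
        for i in range(len(delta)):
            e = np.zeros_like(delta)
            e[i] = 1e-4
            grad[i] = (obj(delta + e) - obj(delta - e)) / 2e-4
        delta = np.clip(delta - lr * grad, -eps, eps)  # budget constraint
        val = obj(delta)
        if val < best_val:                 # keep the best perturbation seen
            best, best_val = delta.copy(), val
    return best

h0 = np.zeros(4)
x = np.array([1.0, -0.5])
delta = predictive_attack(h0, x)
clean = rollout(h0, [x] * 4)
attacked = rollout(h0, [x + delta] + [x] * 3)
print(clean, attacked)  # attacked score is no higher than clean
```

In this toy demo the hallucinated future happens to match the true future, which is why the attack matches the "clairvoyant" objective exactly; the paper's point is that a good prediction of the future gets close to that ideal even when the guess is imperfect.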
Hallucinating To Better Text Translation - AI Summary
With recent, significant advances in deep learning, "there's been an interesting development in how one might use non-text information – for example, images, audio, or other grounding information – to tackle practical tasks involving language," says Kim, because "when humans are performing language processing tasks, we're doing so within a grounded, situated world." The pairing of hallucinated images and text during inference, the team postulated, imitates that process, providing context for improved performance over current state-of-the-art techniques, which use text-only data. To do this, the team used an encoder-decoder architecture with two transformers, a type of neural network model suited to sequence-dependent data like language, which can attend to the key words and semantics of a sentence. Moreover, Kim and Panda note, a technique like VALHALLA is still a black box; it rests on the assumption that hallucinated images provide helpful information, and the team plans to investigate what and how the model is learning in order to validate their methods.
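The architectural idea described above – predict a visual representation from the source text alone, then condition translation on text and predicted image jointly – can be sketched in a few lines. This is a deliberately simplified stand-in, not VALHALLA's actual model: the random embeddings, the module names (`encode_text`, `translate`), and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d_text, d_vis = 8, 4  # illustrative feature sizes

# Hallucination module: maps text features to a predicted visual embedding
W_hall = rng.normal(size=(d_vis, d_text)) * 0.1
# Stand-in decoder: joint (text + hallucinated visual) features -> 3 classes
W_dec = rng.normal(size=(3, d_text + d_vis)) * 0.1

def encode_text(tokens):
    """Stand-in for a transformer text encoder: mean of token embeddings."""
    emb = rng.normal(size=(len(tokens), d_text))
    return emb.mean(axis=0)

def translate(tokens):
    t = encode_text(tokens)
    v = np.tanh(W_hall @ t)         # hallucinated visual embedding
    joint = np.concatenate([t, v])  # pair text with the hallucinated "image"
    logits = W_dec @ joint          # stand-in for the transformer decoder
    return int(np.argmax(logits))   # pick an output class

print(translate(["ein", "kleines", "haus"]))
```

The design point is that no real image is needed at inference time: the visual embedding is generated from the source text itself, so the system keeps the benefits of multimodal grounding while accepting text-only input.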
Hallucinating to better text translation
As babies, we babble and imitate our way to learning languages. We don't start off reading raw text, which requires fundamental knowledge and understanding about the world, as well as the advanced ability to interpret and infer descriptions and relationships. Rather, humans begin our language journey slowly, by pointing and interacting with our environment, basing our words and perceiving their meaning through the context of the physical and social world. Eventually, we can craft full sentences to communicate complex ideas. Similarly, when humans begin learning and translating into another language, the incorporation of other sensory information, like multimedia, paired with the new and unfamiliar words, like flashcards with images, improves language acquisition and retention. Then, with enough practice, humans can accurately translate new, unseen sentences in context without the accompanying media; however, imagining a picture based on the original text helps.
'Hallucinating' AI makes it harder than ever to hide from surveillance - ZDNet
Surveillance video is everywhere these days, and researchers are working on making it smarter and smarter. The latest advance is in the problem of constructing – or "hallucinating," in machine learning (ML) parlance – a complete image of a person from a partial or occluded photo. Occlusion occurs when the object, or body, you want to see is partially covered by an intervening object or body. In a crowded public area, say Times Square in New York, surveillance cameras would rarely get an unobstructed view of a person of interest. That's where the paper "Can Adversarial Networks Hallucinate Occluded People With a Plausible Aspect?" by researchers from the University of Modena comes in.