Hacker News
An AI Customer Service Chatbot Made Up a Company Policy--and Created a Mess
On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit. This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage.
Why ChatGPT Won't Replace Coders Just Yet
Is ChatGPT the beginning of the Star Trek vision: We'll just tell the computer what we want it to do? The short answer is: Not right now, and probably not any time soon. That's because the types of coding problems at which ChatGPT seems to excel are common ones. If you ask it to do something that's been done a ton of times before, then sure, it'll do a very good job. Such tasks have been coded a bajillion times over, and the solutions are all online. OpenAI trained its models on all that existing code.
Some Chatbots Ganged Up and Plagiarized Me
This article is from Big Technology, a newsletter by Alex Kantrowitz. Last weekend, a new Substack called the Rationalist lifted analysis and writing directly from my own newsletter on the platform, Big Technology. Its plagiarized post on the "Creator Economy"--which I'd covered only days prior--went viral, hitting the front page of Hacker News and sparking a conversation with more than 80 comments. It would've been a terrific debut for any publication, if it had been authentic. What made the case of the Rationalist particularly striking, though, was that its author--an avatar by the name of "Petra"--admitted to having used A.I. tools to produce the story, including those from OpenAI, Jasper, and Hugging Face.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.74)
DO YOU KNOW? THE BLOG TITLE OPTIMIZER USES AI--AND HOW WELL DOES IT WORK?
The AI system [Max] uses is GPT-3, a language model that works with ordinary, human-sounding language and can be adapted in a variety of ways. The optimizer takes as input a blog post title to optimize. OpenAI's pretrained GPT-3 engine is used to generate six alternate titles. For each of those alternate titles, a fine-tuned version of GPT-3 is consulted to judge how "good" it is, based on custom training data. The custom training data in step 3 comes from bulk submission data from Hacker News, obtained via Google's BigQuery service.
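The generate-then-judge pipeline described above can be sketched in a few lines. This is a minimal illustration of the structure only: the `generate_fn` and `score_fn` callables are hypothetical stand-ins for the pretrained GPT-3 engine and the fine-tuned judging model, not real API calls.

```python
def optimize_title(title, generate_fn, score_fn, n_candidates=6):
    """Generate alternate titles, then rank all options by a quality score.

    generate_fn(title, n) -> list of n alternate titles
        (stand-in for the pretrained GPT-3 generation step).
    score_fn(title) -> numeric quality score
        (stand-in for the fine-tuned GPT-3 judge trained on HN data).
    Returns a list of (score, title) pairs, best first.
    """
    candidates = generate_fn(title, n_candidates)
    # Score every candidate; keep the original title in the running too,
    # since the optimizer may conclude it was already the best option.
    scored = [(score_fn(t), t) for t in [title] + candidates]
    scored.sort(reverse=True)  # highest score first
    return scored


# Deterministic stubs so the sketch runs without any model access.
stub_generate = lambda title, n: [f"{title} ({i})" for i in range(n)]
stub_score = len  # toy scorer: longer titles rank higher

ranked = optimize_title("Why GPT-3 Matters", stub_generate, stub_score)
```

With six candidates plus the original, `ranked` holds seven scored titles in descending order; swapping the stubs for real model calls leaves the control flow unchanged.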
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
Machine Learning: The Great Stagnation - AI Summary
This blog post generated a lot of discussion on Hacker News -- many people have reached out to me giving more examples of the stagnation and more examples of projects avoiding it. Maybe I'll add to this article or maybe I'll write a new one, let's see what happens. In the meantime, if you can't wait for me to stop staring at the ceiling and write something new, I'm pretty sure you'll enjoy my e-book at robotoverlordmanual.com. Academics think of themselves as trailblazers, explorers -- seekers of the truth. Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work then it moves from the realm of research to engineering.
Top 10 Data Science Newsletters To Stay Updated Amid Lockdown
With data science and artificial intelligence evolving on a daily basis, the sheer volume of information they generate can be challenging to keep pace with. That's why these data science news websites and blogs publish newsletters that continually churn out relevant and significant information for readers. An excellent form of curated content, newsletters can be extremely informative and insightful for data science professionals, students, and business leaders alike. These weekly newsletters cover industry trends, the latest news, different methodologies, and new technologies, making them an exciting learning resource for many. Further, amid such a vast amount of information, it is critical to steer clear of clickbait and fake news, and these newsletters can be the perfect remedy.
- North America > United States > New York (0.05)
- Asia > India (0.05)
An AI-written blog highlights bad human judgment on GPT-3
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Last week, many tech publications broke news about a blog generated by artificial intelligence that fooled thousands of users and landed on top of the Hacker News forum. GPT-3, the massive language model developed by AI research lab OpenAI, had written the articles. Since its release in July, GPT-3 has caused a lot of excitement in the AI community. Developers who have received early access to the language model have used it to do many interesting things, showing just how far AI research has come. But like many other developments in AI, there's also a lot of hype and misunderstanding surrounding GPT-3, and many of the stories published about it misrepresent its capabilities. The blog written by GPT-3 resurfaced worries about fake-news onslaughts, robots deceiving humans, and technological unemployment, which have become the hallmark of AI reporting.
A college kid created a fake, AI-generated blog. It reached #1 on Hacker News.
At the start of the week, Liam Porr had only heard of GPT-3. By the end, the college student had used the AI model to produce an entirely fake blog under a fake name. It was meant as a fun experiment. But then one of his posts found its way to the number-one spot on Hacker News. Few people noticed that his blog was completely AI-generated.
GPT-3 is OpenAI's latest and largest language AI model, which the San Francisco–based research lab began drip-feeding out in mid-July. In February of last year, OpenAI made headlines with GPT-2, an earlier version of the algorithm, which it announced it would withhold for fear it would be abused. The decision immediately sparked a backlash, as researchers accused the lab of pulling a stunt. By November, the lab had reversed position and released the model, saying it had detected "no strong evidence of misuse so far." The lab took a different approach with GPT-3: it neither withheld the model nor granted open public access, instead offering it to select developers through an invitation-only program.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.50)
Companies are looking for React, machine learning, and Go - JAXenter
Sending off resumes can often feel like shouting into a void. It's always hard to see what strikes an employer's fancy. But thanks to the good people at Hacker News, it's even easier to see what they're looking for these days with their monthly tracking of the hiring trends. As a matter of course, Hacker News keeps an eye on all the terms used in "whoishiring" posts. It's an elegant way to keep track of what employers are really looking for in a candidate.