machine learning
Chegg Embraced AI. ChatGPT Ate Its Lunch Anyway
Investors were surprised when the online education company Chegg last month revealed that ChatGPT was hurting subscriber growth; the company lost half of its market value overnight. But long before Chegg became an index case for the disruptive force of ChatGPT, its top brass had heard plenty of warnings about the threat and opportunity of generative AI. For years, on afternoon walks outside Chegg's Silicon Valley headquarters, former executives say they had discussed someday slashing costs by tapping AI programs to replace an army of instructors who answer student questions and draft flashcards. Matthew Ramirez, a product leader who left Chegg two years ago, says he even advised CEO Dan Rosensweig in 2020 that generative AI would be the bus that ran down Chegg if the company didn't prepare itself. And just weeks after OpenAI launched ChatGPT last November, a source familiar with the exchange says, one Chegg executive had the bot write an email to Rosensweig urging him to develop a ChatGPT rival.
- Education > Educational Setting > Online (0.57)
- Banking & Finance > Trading (0.37)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.83)
AI Is Being Used to 'Turbocharge' Scams
Code hidden inside PC motherboards left millions of machines vulnerable to malicious updates, researchers revealed this week. Staff at security firm Eclypsium found code within hundreds of models of motherboards created by Taiwanese manufacturer Gigabyte that allowed an updater program to download and run another piece of software. While the system was intended to keep the motherboard updated, the researchers found that the mechanism was implemented insecurely, potentially allowing attackers to hijack the backdoor and install malware. Elsewhere, Moscow-based cybersecurity firm Kaspersky revealed that its staff had been targeted by newly discovered zero-click malware impacting iPhones. Victims were sent a malicious message, including an attachment, on Apple's iMessage. The attack automatically started exploiting multiple vulnerabilities to give the attackers access to devices, before the message deleted itself.
- Asia > North Korea (0.51)
- Asia > Russia (0.30)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.25)
- (2 more...)
Detecting AI may be impossible. That's a big problem for teachers.
In a lengthy blog post last week, Turnitin Chief Product Officer Annie Chechitelli said the company wants to be transparent about its technology, but she didn't back off from deploying it. She said that for documents its detection software thinks contain over 20 percent AI writing, the false positive rate for the whole document is less than 1 percent. But she didn't specify the error rate for the remaining cases, where its software thinks a document contains less than 20 percent AI writing. In those cases, Turnitin has begun putting an asterisk next to results "to call attention to the fact that the score is less reliable."
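Why even a sub-1-percent false positive rate is a big problem for teachers comes down to base rates. The arithmetic below is illustrative only: the submission volume and the share of honest essays are assumptions for the sake of the example, not Turnitin figures.

```python
# Base-rate sketch: a small false positive rate, applied at scale,
# still flags a large absolute number of honest writers.
# All inputs here are illustrative assumptions, not Turnitin data.

false_positive_rate = 0.01      # "less than 1 percent", taken at its ceiling
essays_submitted = 1_000_000    # hypothetical volume for illustration
honest_share = 0.90             # assume 90% of essays contain no AI writing

honest_essays = essays_submitted * honest_share
wrongly_flagged = honest_essays * false_positive_rate

print(int(wrongly_flagged))  # 9000 honest essays flagged under these assumptions
```

Under these assumed numbers, nine thousand students would face an unfounded accusation, which is why the undisclosed error rate for low-scoring documents matters so much.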
PeSTo: an AI tool for predicting protein interactions
Figure: PeSTo, a geometric deep-learning method, predicts protein binding interfaces; the amino acids involved in the binding interface are highlighted in red. Proteins are essential to the biological functions of most living organisms. They have evolved to interact with other proteins, nucleic acids, lipids and other molecules, and those interactions form large, "supra-molecular" complexes. This means that understanding protein interactions is crucial for understanding many cellular processes.
AI Is Not an Arms Race
The window of what AI can't do seems to be contracting week by week. Machines can now write elegant prose and useful code, ace exams, conjure exquisite art, and predict how proteins will fold. Last summer I surveyed more than 550 AI researchers, and nearly half of them thought that, if built, high-level machine intelligence would lead to impacts that had at least a 10% chance of being "extremely bad (e.g., human extinction)." On May 30, hundreds of AI scientists, along with the CEOs of top AI labs like OpenAI, DeepMind and Anthropic, signed a statement urging caution on AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The simplest argument is that progress in AI could lead to the creation of superhumanly smart artificial "people" with goals that conflict with humanity's interests, and the ability to pursue them autonomously.
The Leak That Has Big Tech and Regulators Panicked
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.
- Law > Statutes (0.85)
- Government (0.84)
Nvidia: chipmaker's strategic AI moves result in a tech position of power
Nvidia saw its valuation soar to $1tn on Tuesday, making it the fifth most valuable American company and one of the first major corporate beneficiaries of the hype around AI. The chipmaker has been a major and in some cases dominant player in several industries for years. But no development has raised its profile – and its potential windfall – as much as the current excitement around generative AI. Nvidia has been around for 30 years. The company got its start in 1993 building graphics processing units (GPUs) for video games.
- North America > United States > New York > New York County > New York City (0.06)
- Europe > Switzerland (0.05)
- Asia > Taiwan (0.05)
- Information Technology > Hardware (1.00)
- Leisure & Entertainment > Games > Computer Games (0.35)
- Transportation > Ground > Road (0.30)
Risk of extinction by AI should be 'global priority', say tech experts
A group of leading technology experts from across the globe have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives from Google's DeepMind, the ChatGPT developer OpenAI and AI startup Anthropic. The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation of the technology amid existential fears the technology could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.83)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
The biggest problem in AI? Lying chatbots
Companies are also spending time and money improving their models by testing them with real people. A technique called reinforcement learning from human feedback (RLHF), in which human testers manually improve a bot's answers and those corrections are fed back into the system, is widely credited with making ChatGPT so much better than the chatbots that came before it. Another popular approach is to connect chatbots to databases of factual or more trustworthy information, such as Wikipedia, Google search, or bespoke collections of academic articles or business documents.
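The database-grounding idea can be sketched in a few lines: retrieve the most relevant passages from a trusted collection, then hand them to the model alongside the question so it answers from sources rather than from memory. This is a minimal illustration with a toy keyword-overlap scorer; the document list, scoring, and prompt wording are all assumptions, not any vendor's actual pipeline.

```python
# Minimal sketch of grounding a chatbot answer in a trusted document store.
# Real systems use embedding-based search; keyword overlap stands in here.

def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved passages so the model answers from the sources."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "Nvidia builds graphics processing units (GPUs).",
    "Chegg is an online education company.",
    "Proteins interact to form supra-molecular complexes.",
]
prompt = build_prompt("What does Nvidia build?", docs)
print(prompt)
```

Because the model is asked to answer from the retrieved text, a wrong or fabricated claim can at least be checked against the quoted sources, which is the point of this approach.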
AI in dentistry: Researchers find that artificial intelligence can create better dental crowns
Fox News medical contributor Dr. Marc Siegel joins 'Fox & Friends' to discuss the benefits of artificial intelligence in the medical industry if used with caution. Artificial intelligence is taking on an ever-widening role in the health and wellness space, assisting with everything from cancer detection to medical documentation. Soon, AI could make it easier for dentists to give patients a more natural, functional smile. Researchers from the University of Hong Kong recently developed an AI algorithm that uses 3D machine learning to design personalized dental crowns with a higher degree of accuracy than traditional methods, according to a press release from the university. The AI analyzes data from the teeth adjacent to the crown to ensure a more natural, precise fit than crowns created using today's methods, the researchers said.
- Asia > China > Hong Kong (0.28)
- North America > United States > Texas > Harris County > Houston (0.16)
- Health & Medicine > Therapeutic Area > Dental and Oral Health (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (0.31)