ZDNET
People are already trying to get ChatGPT to write malware
The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to generate malicious code. ChatGPT is an AI-driven natural language processing tool that interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.34)
ChatGPT is changing everything. But it still has its limits
Since its release in late November, ChatGPT has taken the world by storm. The chatbot's advanced AI abilities allow it to handle tasks entirely on its own, such as composing essays, emails and poems, writing and debugging code, and even passing exams. Now that a chatbot can do in seconds what humans do so well, what does that mean for our future? If you have had the chance to chat with the AI chatbot, you were probably impressed by how much it understands and how conversationally it responds. But the chatbot is capable of much more, and its technical capabilities are being tested every day.
- North America > United States > Minnesota (0.06)
- North America > United States > Pennsylvania (0.05)
- North America > United States > New York (0.05)
- Education > Educational Setting > Higher Education (0.35)
- Education > Curriculum (0.35)
In latest benchmark test of AI, it's mostly Nvidia competing against Nvidia
For lack of rich competition, some of Nvidia's most significant results in the latest MLPerf were against itself, comparing its newest GPU, H100 "Hopper," to its existing product, the A100. Although chip giant Nvidia tends to cast a long shadow over the world of artificial intelligence, its ability to simply drive competition out of the market may be increasing, if the latest benchmark test results are any indication.
AI's true goal may no longer be intelligence
AI has been rapidly finding industrial applications, such as the use of large language models to automate enterprise IT. Those applications may make the question of actual intelligence moot. The British mathematician Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'" His inquiry framed the discussion for decades of artificial intelligence research. For a couple of generations of scientists contemplating AI, the question of whether "true" or "human" intelligence could be achieved was always an important part of the work.
Meta's AI guru LeCun: Most of today's AI approaches will never lead to true intelligence
"I think AI systems need to be able to reason," says Yann LeCun, Meta's chief AI scientist. Today's popular AI approaches such as Transformers, many of which build upon his own pioneering work in the field, will not be sufficient. "You have to take a step back and say, Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there," says LeCun. Yann LeCun, chief AI scientist of Meta Properties, owner of Facebook, Instagram, and WhatsApp, is likely to tick off a lot of people in his field. With the posting in June of a think piece on the Open Review server, LeCun offered a broad overview of an approach he thinks holds promise for achieving human-level intelligence in machines. Implied if not articulated in the paper is the contention that most of today's big projects in AI will never be able to reach that human-level goal. In a discussion this month with ZDNet via Zoom, LeCun made clear that he views with great skepticism many of the most successful avenues of research in deep learning at the moment. "I think they're necessary but not sufficient," the Turing Award winner told ZDNet of his peers' pursuits. Those include large language models such as the Transformer-based GPT-3 and their ilk. As LeCun characterizes it, the Transformer devotées believe, "We tokenize everything, and train giganticmodels to make discrete predictions, and somehow AI will emerge out of this." "They're not wrong," he says, "in the sense that that may be a component of a future intelligent system, but I think it's missing essential pieces." It's a startling critique of what appears to work coming from the scholar who perfected the use of convolutional neural networks, a practical technique that has been incredibly productive in deep learning programs. LeCun sees flaws and limitations in plenty of other highly successful areas of the discipline. Reinforcement learning will also never be enough, he maintains. Researchers such as David Silver of DeepMind, who developed the AlphaZero program that mastered Chess, Shogi and Go, are focusing on programs that are "very action-based," observes LeCun, but "most of the learning we do, we don't do it by actually taking actions, we do it by observing." Lecun, 62, from a perspective of decades of achievement, nevertheless expresses an urgency to confront what he thinks are the blind alleys toward which many may be rushing, and to try to coax his field in the direction he thinks things should go. "We see a lot of claims as to what should we do to push forward towards human-level AI," he says.
- Information Technology > Services (0.68)
- Leisure & Entertainment > Games (0.66)
- Transportation > Ground > Road (0.47)
- (2 more...)
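For readers unfamiliar with the recipe LeCun is criticizing, here is a deliberately tiny, hypothetical Python sketch of "tokenize everything and make discrete predictions." Real large language models use learned subword tokenizers and billion-parameter Transformers trained by gradient descent; this toy substitutes whitespace tokens and bigram counts purely to make the discrete-prediction loop concrete.

```python
# Toy illustration of the "tokenize everything, predict the next token"
# recipe LeCun describes. Real LLMs use learned subword tokenizers and
# Transformer networks; this hypothetical sketch uses whitespace tokens
# and bigram counts just to show the discrete-prediction loop.
from collections import Counter, defaultdict

corpus = "we tokenize everything and we predict the next token and we repeat"

# 1. Tokenize: map raw text to a sequence of discrete tokens.
tokens = corpus.split()

# 2. "Train": for each token, estimate the distribution of its successor.
successors = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    successors[prev][nxt] += 1

# 3. Predict: greedily emit the most likely next token, step by step.
def generate(seed: str, steps: int = 5) -> list[str]:
    out = [seed]
    for _ in range(steps):
        candidates = successors.get(out[-1])
        if not candidates:  # no observed successor: stop generating
            break
        out.append(candidates.most_common(1)[0][0])
    return out

print(generate("we"))  # ['we', 'tokenize', 'everything', 'and', 'we', 'tokenize']
```

The point of the toy is LeCun's: the whole mechanism is a loop of discrete next-token predictions, and his objection is that such a loop, however large the model, may be missing essential pieces of intelligence.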
Meta's AI luminary LeCun explores deep learning's energy frontier
So-called energy-based models, which borrow concepts from statistical physics, could lead to deep learning forms of AI that make abstract predictions, says Yann LeCun, Meta's chief scientist. Three decades ago, Yann LeCun, while at Bell Labs, formalized an approach to machine learning called convolutional neural networks that would prove profoundly productive in solving tasks such as image recognition. CNNs, as they're commonly known, are a workhorse of AI's deep learning, winning LeCun the prestigious ACM Turing Award, the equivalent of a Nobel for computing, in 2019. These days, LeCun, who is both a professor at NYU and chief scientist at Meta, is the most excited he's been in 30 years, he told ZDNet in an interview last week. The reason: new discoveries are rejuvenating a long line of inquiry that could turn out to be as productive in AI as CNNs have been. That new frontier is known as energy-based models. Whereas a probability function is "a description of how likely a random variable or set of random variables is to take on each of its possible states" (see Deep Learning, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, 2016), an energy-based model instead scores the compatibility between two variables. Borrowing language from statistical physics, energy-based models posit that the energy between two variables rises if they are incompatible and falls the more they are in accord. This can remove the complexity that arises in "normalizing" a probability distribution. It's an old idea in machine learning, going back at least to the 1980s, but there has been progress since then toward making energy-based models more workable (a minimal sketch of the idea appears below).
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > France (0.04)
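The article's description of energy-based models lends itself to a small worked example. What follows is a minimal, hypothetical Python sketch, not LeCun's actual models: the quadratic energy function and the candidate set are invented for illustration. It shows how an energy function scores compatibility directly, and why picking the most compatible value needs no normalizing constant, whereas a probabilistic view requires the partition function Z.

```python
# Hedged sketch of the energy-based idea described above (not LeCun's
# actual models): an energy function E(x, y) scores how *incompatible*
# a pair is; low energy means the variables are in accord.
import math

def energy(x: float, y: float) -> float:
    # Hypothetical energy: pairs agree when y is near 2*x.
    return (y - 2.0 * x) ** 2

x = 1.5
candidates = [1.0, 2.9, 3.0, 5.0]

# To pick the most compatible y we only compare energies pointwise;
# no normalization over all possible y is needed.
best = min(candidates, key=lambda y: energy(x, y))
print(best)  # 3.0, since energy(1.5, 3.0) == 0

# A probabilistic model would instead need the partition function Z,
# a sum (or integral) over every candidate, to turn energies into
# normalized probabilities p(y|x) = exp(-E(x, y)) / Z.
Z = sum(math.exp(-energy(x, y)) for y in candidates)
probs = {y: math.exp(-energy(x, y)) / Z for y in candidates}
print(max(probs, key=probs.get))  # same winner, but Z required visiting every y
```

The design point is the one the article makes: comparing energies only requires evaluating E where you need it, while normalized probabilities force a sum over every possible state, which is exactly the complexity energy-based models can sidestep.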
How do we know AI is ready to be in the wild? Maybe a critic is needed
Mischief can happen when AI is let loose in the world, just as with any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which disproportionately mismatched members of some ethnic groups with criminal mugshots. Given the risk, how can society know whether a technology has been refined to the point where it is safe to deploy? "This is a really good question, and one we are actively working on," Sergey Levine, assistant professor in the University of California, Berkeley's department of electrical engineering and computer science, told ZDNet by email this week. Levine and colleagues have been working on an approach to machine learning in which the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially.
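The article does not detail Levine's method, but the pattern it describes, one component proposing decisions while a second, adversarially minded component critiques them, can be illustrated with a deliberately simple, hypothetical Python sketch; every function name and threshold here is invented for illustration.

```python
# Hypothetical illustration of the "internal critic" pattern described
# above; this is not Levine's actual algorithm. One component proposes
# a decision; an adversarially minded second component tries to flag
# inputs on which that decision is likely to be unsafe.

def actor(state: float) -> str:
    # Toy policy: act whenever the (noisy) reading looks positive.
    return "act" if state > 0 else "wait"

def critic(state: float) -> float:
    # Toy risk score: near the decision boundary, small sensor noise
    # flips the actor's choice, so the critic reports high risk there.
    return 1.0 / (1.0 + abs(state))

RISK_THRESHOLD = 0.8

def safe_decision(state: float) -> str:
    proposal = actor(state)
    if critic(state) > RISK_THRESHOLD:
        return "defer"  # critic vetoes: fall back to a safe default
    return proposal

for s in (-3.0, -0.1, 0.05, 2.0):
    print(s, safe_decision(s))  # borderline states (-0.1, 0.05) are deferred
```

The sketch captures only the shape of the idea: decisions are not deployed directly but must first survive an adversarial critique, which is one way to decide whether a system is "ready to be in the wild."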
Global Big Data Conference
The latest ZDNet survey on AI actionability and accountability finds that IT teams are taking a direct lead, with most companies building in-house systems. However, oversight of AI-generated decisions is lagging. Can businesses trust the decisions that artificial intelligence and machine learning are churning out in ever-greater numbers? Those decisions need more checks and balances: IT leaders and professionals have to ensure that AI is as fair, unbiased, and accurate as possible. That means more training and greater investment in data platforms.
- Information Technology > Data Science > Data Mining > Big Data (0.72)
- Information Technology > Artificial Intelligence > Machine Learning (0.68)