
Abusive text transformation using LLMs

Chandra, Rohitash, Choi, Jiyong

arXiv.org Artificial Intelligence

Although Large Language Models (LLMs) have demonstrated significant advancements in natural language processing tasks, their effectiveness in the classification and transformation of abusive text into non-abusive versions remains an area for exploration. In this study, we aim to use LLMs to transform abusive text (tweets and reviews) featuring hate speech and swear words into non-abusive text, while retaining the intent of the text. We evaluate the performance of four state-of-the-art LLMs, namely Gemini, GPT-4o, DeepSeek and Groq, on their ability to identify abusive text. We then prompt them to transform the text into a version that is free of abusive and inappropriate content while maintaining a similar level of sentiment and semantics, i.e. the transformed text needs to maintain its message. Afterwards, we evaluate the raw and transformed datasets with sentiment analysis and semantic analysis. Our results show that Groq provides vastly different results when compared with the other LLMs. We have identified similarities between GPT-4o and DeepSeek-V3.
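The evaluation step described above compares the raw and transformed text for semantic preservation. As a minimal sketch of that idea, the snippet below scores word-overlap similarity between an abusive input and a hypothetical LLM rewrite using cosine similarity over bag-of-words vectors; this is a deliberate simplification (the paper presumably uses proper sentiment and embedding models), and both example sentences are invented for illustration.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Crude semantic-overlap proxy: cosine similarity of word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical raw text and a hypothetical non-abusive LLM rewrite.
raw = "this service is damn awful, total garbage"
transformed = "this service is very disappointing, total letdown"

# High similarity suggests the rewrite kept the original message.
score = cosine_similarity(raw, transformed)
print(f"semantic overlap: {score:.2f}")
```

In practice one would substitute sentence-embedding similarity and a trained sentiment classifier for the bag-of-words proxy, but the comparison structure (score raw vs. transformed, check that meaning survives) is the same.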



Startup Tenstorrent shows AI is changing computing and vice versa

#artificialintelligence

A few years ago, numerous experienced computer chip designers set out on their own to design novel kinds of parts to improve the performance of artificial intelligence. It's taken a few years, but the world is finally seeing what those young hopefuls have been working on. The new chips coming out suggest, as ZDNet has reported in the past, that AI is totally changing the nature of computing. It also suggests that changes in computing are going to have an effect on how artificial intelligence programs, such as deep learning neural networks, are designed. Case in point: startup Tenstorrent, founded in 2016 and headquartered in Toronto, Canada, on Thursday unveiled its first chip, "Grayskull," at a microprocessor conference run by the legendary computer chip analysis firm The Linley Group.


Artificial Intelligence Is Driving A Silicon Renaissance

#artificialintelligence

Bay Area startup Cerebras Systems recently unveiled the largest computer chip in history, ... [ ] purpose-built for AI. The semiconductor is the foundational technology of the digital age. It gave Silicon Valley its name. It sits at the heart of the computing revolution that has transformed every facet of society over the past half-century. The pace of improvement in computing capabilities has been breathtaking and relentless since Intel introduced the world's first microprocessor in 1971.



EETimes - Groq's AI Chip Debuts in the Cloud

#artificialintelligence

Groq's tensor streaming processor (TSP) silicon is now available to accelerate customers' AI workloads in the cloud. Cloud service provider Nimbix now offers machine learning acceleration on Groq hardware as an on-demand service for "selected customers" only. While there are several startups building AI silicon for the data center, Groq now joins Graphcore as one of only two with accelerators commercially available to customers as part of a cloud service. Graphcore previously announced that its accelerators are available as part of Microsoft Azure. "Groq's simplified processing architecture is unique, providing unprecedented, deterministic performance for compute intensive workloads, and is an exciting addition to our cloud-based AI and Deep Learning platform," said Steve Hebert, Nimbix's CEO.


Groq Selects Synopsys ZeBu Server 4 for Its TSP Architecture Development

#artificialintelligence

MOUNTAIN VIEW, Calif., April 13, 2020 -- Synopsys, Inc. announced that Groq has adopted the Synopsys ZeBu Server 4 emulation solution for its Tensor Streaming Processor (TSP) architecture development. ZeBu Server 4 performance and capacity enabled first silicon success of Groq's TSP architecture for artificial intelligence (AI) and machine learning platforms. ZeBu also enabled optimization and validation of Groq's TSP architecture prior to silicon, resulting in unmatched performance for throughput and latency. "As we redefine compute technology with our unique single-core architecture, we are enabling the development of artificial intelligence and machine learning platforms that offer twice the inference performance while drastically reducing infrastructure costs," said Adrian Mendes, chief operating officer at Groq. "Synopsys ZeBu Server 4 Cloud solution delivered the performance and capacity required to efficiently analyze performance of our Tensor Streaming Processor, enabling us to focus on silicon innovation." ZeBu Server 4 is the industry's fastest emulation system offering 2X higher performance over competitive solutions.


Groq Announces World's First Architecture Capable of 1,000,000,000,000,000 Operations per Second on a Single Chip

#artificialintelligence

Groq – the fast-growing start-up and inventor of the Tensor Streaming Processor (TSP) architecture, a new class of compute – today announced that its TSP architecture is capable of 1 PetaOp/s performance on a single chip implementation. The Groq architecture is the first in the world to achieve this level of performance, which is equivalent to one quadrillion operations per second, or 1e15 ops/s. Groq's architecture is also capable of up to 250 trillion floating-point operations per second (FLOPS). "We are excited for the industry and our customers," said Jonathan Ross, Groq's co-founder and CEO. "Top GPU companies have been telling customers that they'd hoped to be able to deliver one PetaOp/s performance within the next few years; Groq is announcing it today, and in doing so setting a new performance standard. The Groq architecture is many multiples faster than anything else available for inference, in terms of both low latency and inferences per second. Our customer interactions confirm that. We had first silicon back, first-day power-on, programs running in the first week, sampled to partners and customers in under six weeks, with A0 silicon going into production."


AI Hardware Built from a Software-first Perspective: Groq's Flexible Silicon Architecture - News

#artificialintelligence

Semiconductor industry startups are usually founded by hardware engineers who develop a silicon architecture and then figure out how to map software for that specific hardware. Here is a tale of a chip startup founded in the age of artificial intelligence (AI) that has a software DNA. Groq was founded in 2016 by a group of software engineers who wanted to solve AI problems from the software side. When they approached the issue without any preconceptions of what an AI architecture may need to look like, they were able to create an architecture that can be mapped to different AI models. The company is focused on the inference market for data centers and autonomous vehicles, and its first product is a PCIe plug-in card for which Groq designed the ASIC and AI accelerator and developed the software stack.


Intel, GraphCore And Groq: Let The AI Cambrian Explosion Begin

#artificialintelligence

As we approach the end of a year full of promises from AI startups, a few companies are meeting their promised 2019 launch dates. These include Intel, with its long-awaited Nervana platform, UK startup Graphcore and the stealthy Groq from Silicon Valley. Some of these announcements fall a bit short on details, but all claim to represent breakthroughs in performance and efficiency for training and/or inference processing. Other recent announcements include Cerebras's massive wafer-scale AI engine inside its multi-million dollar CS-1 system and NVIDIA's support for GPUs on ARM-based servers. I'll opine on those soon, but here I will focus on Intel, Graphcore and Groq's highly anticipated chips.