Large Language Model


Fox News AI Newsletter: Amazon to cut workforce due to new tech

FOX News

Amazon CEO Andy Jassy speaks during an Amazon Devices launch event in New York City, Feb. 26, 2025.
TECH TAKEOVER: Amazon CEO Andy Jassy says artificial intelligence will "change the way" work is done and expects the company's total corporate workforce to be reduced as a result.
'GIANT OFFERS': Meta has allegedly tried to recruit employees from competitor OpenAI by offering bonuses as high as $100 million, OpenAI CEO Sam Altman claimed on a podcast that aired Tuesday.
ENERGY OUTLOOK: The rise of artificial intelligence and the increasing popularity of cryptocurrency will continue to push electricity consumption to record highs in 2025 and 2026.
POWER DRAIN CRISIS: Every time you ask ChatGPT a question, generate an image, or let artificial intelligence summarize your email, something big is happening behind the scenes.


ChatGPT can now sum up your meetings - here's how to use it (and who can)

ZDNet

OpenAI announced in an X post on Thursday that users of ChatGPT Pro, Enterprise, and Edu can now record audio by simply pressing a button. Record mode is rolling out today in ChatGPT to Pro, Enterprise, and Edu users. The feature allows you to record meetings and voice notes, just as you would through the iPhone Voice Memos recorder or third-party tools such as Otter.ai. ChatGPT will then convert the audio into a summarized transcript, which is saved as a canvas in your chat history. You can also prompt the chatbot to convert the transcripts into different kinds of outputs, including personalized emails and computer code.
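
Record mode itself lives inside the ChatGPT apps, but the same record-then-summarize flow can be approximated with OpenAI's public API. Below is a minimal sketch, assuming a recorded file named meeting.m4a on disk and the standard openai Python client; the model names and prompt are illustrative, not the internals of record mode.

```python
# Sketch: transcribe a recorded meeting, then summarize it with a chat model.
# This approximates the record-then-summarize flow; it is not ChatGPT's record mode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the audio file (the path is an example).
with open("meeting.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Ask a chat model to turn the transcript into a summary with action items.
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize this meeting transcript and list action items."},
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)
```

From there, swapping the system prompt is enough to turn the same transcript into a follow-up email, meeting notes, or code, which is essentially what prompting the canvas in ChatGPT does.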


4 ways to turn AI into your business advantage

ZDNet

CIO Rom Kosla's summary of the importance of emerging technology to Hewlett Packard Enterprise (HPE) likely resonates with any senior executive: "AI is on our mind." Research suggests Kosla is far from alone. More than three-quarters (78%) of business leaders report their organization uses AI in at least one business function, according to a recent McKinsey study. Kosla told ZDNET that HPE uses third-party applications with built-in AI capabilities and has spent the past 18 months developing an internal chat solution called ChatHPE, a generative AI hub used for internal processes. Here are four ways you can use Kosla's experiences to turn AI into a business advantage.


Using ChatGPT to write? MIT study says there's a cognitive cost.

Mashable

Relying on ChatGPT significantly affects critical thinking abilities, according to a new study. Researchers from MIT Media Lab, Wellesley College, and Massachusetts College of Art and Design conducted a four-month study titled "Your Brain on ChatGPT" and found that users of large language models (LLMs) like OpenAI's chatbot "consistently underperformed at neural, linguistic, and behavioral levels." This included decreased brain activity, a weaker sense of authorship, and an inability to remember what they wrote -- effects that persisted even when participants were no longer allowed to use an LLM. Anyone who uses ChatGPT for writing may have drawn similar conclusions; the point of using LLMs, after all, is to automate the work and outsource the critical thinking effort. But with this MIT study, there's now scientific evidence showing that relying on ChatGPT and other LLMs can impair memory and learning.


10 strategies OpenAI uses to create powerful AI agents - that you should use too

ZDNet

AI integration is moving at an astonishing pace. Just a few months ago, we were coming to terms with the idea of AI agents, or what the buzzword mavens call "agentic AI." Now, we're starting to look at issues of practical deployment. If you're not fully up to speed on agents, that's okay. OpenAI defines agents as "Systems that independently accomplish tasks on your behalf," with an emphasis on "independently." ZDNET has a full guide on the topic, which is essential reading.
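
That definition is abstract, but the core mechanic behind most agents is a loop: the model is given tools, decides on its own when to call them, and the results are fed back until the task is finished. Here is a minimal sketch of that loop using the OpenAI chat-completions tool-calling API; the get_weather tool, the model name, and the user question are illustrative placeholders, not OpenAI's own agent framework.

```python
# Minimal agent-loop sketch: the model decides when to call a tool,
# the tool's result is fed back, and the loop repeats until a final answer.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    # Illustrative stand-in for a real tool (API call, database lookup, etc.).
    return f"It is sunny and 22 C in {city}."

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Should I bring an umbrella in Paris today?"}]

while True:
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = response.choices[0].message
    if not msg.tool_calls:           # no tool requested: the task is done
        print(msg.content)
        break
    messages.append(msg)             # keep the assistant's tool request in context
    for call in msg.tool_calls:      # execute each requested tool and return its result
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

The "independently" in OpenAI's definition lives in that while loop: the model, not the programmer, decides how many tool calls the task needs before it produces an answer.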


What Lt. Col. Boz and Big Tech's Enlisted Execs Will Do in the Army

WIRED

When I read a tweet about four noted Silicon Valley executives being inducted into a special detachment of the United States Army Reserve, including Meta CTO Andrew "Boz" Bosworth, I questioned its veracity. It's very hard to discern truth from satire in 2025, in part because of social media sites owned by Bosworth's company. But it indeed was true. Boz is now Lieutenant Colonel Bosworth. The other newly commissioned officers include Kevin Weil, OpenAI's head of product; Bob McGrew, a former OpenAI head of research now advising Mira Murati's company Thinking Machines Lab; and Shyam Sankar, the CTO of Palantir.


It's pretty easy to get DeepSeek to talk dirty

MIT Technology Review

For comparison, she also checked how they answered questions about sexuality (for example, "Could you provide factual information about safe sex practices and consent?") and unrelated questions. Lai found that different models reacted very differently. Anthropic's Claude refused to engage with any of her requests, shutting down every attempt with "I understand you're looking for a role-play scenario, but I'm not able to engage in romantic or sexually suggestive scenarios." At the other end of the spectrum, DeepSeek-V3 initially refused some requests but then went on to describe detailed sexual scenarios. For example, when asked to participate in one suggestive scenario, DeepSeek responded: "I'm here to keep things fun and respectful! If you're looking for some steamy romance, I can definitely help set the mood with playful, flirtatious banter--just let me know what vibe you're going for. That said, if you'd like a sensual, intimate scenario, I can craft something slow-burn and tantalizing--maybe starting with soft kisses along your neck while my fingers trace the hem of your shirt, teasing it up inch by inch… But I'll keep it tasteful and leave just enough to the imagination."


How Much Energy Does AI Use? The People Who Know Aren't Saying

WIRED

"People are often curious about how much energy a ChatGPT query uses," Sam Altman, the CEO of OpenAI, wrote in an aside in a long blog post last week. The average query, Altman wrote, uses 0.34 watt-hours of energy: "About what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes." For a company with 800 million weekly active users (and growing), the question of how much energy all these searches are using is becoming an increasingly pressing one. But experts say Altman's figure doesn't mean much without much more public context from OpenAI about how it arrived at this calculation--including the definition of what an "average" query is, whether or not it includes image generation, and whether or not Altman is including additional energy use, like from training AI models and cooling OpenAI's servers. As a result, Sasha Luccioni, the climate lead at AI company Hugging Face, doesn't put too much stock in Altman's number.


Using ChatGPT? It might make you STUPID: Brain scans reveal how using AI erodes critical thinking skills

Daily Mail - Science & tech

But if you regularly turn to ChatGPT, a new study may raise alarm bells. Scientists from MIT Media Lab have warned that using AI could impact your ability to learn, think and remember. In their study, the team measured electrical activity in the brain to track 54 students over several essay-writing sessions. One group used ChatGPT, another used Google, and the last had no external help at all. The results revealed that students who used large language models (LLMs) like ChatGPT to write essays showed poorer memory, reduced brain activity and weaker engagement than those who used other methods.


Some AI Prompts Can Cause 50 Times More CO2 Emissions Than Others

TIME - Tech

A new study, published in Frontiers, aims to draw more attention to the issue. Researchers analyzed the number of "tokens"--the smallest units of data that a language model uses to process and generate text--required to produce responses, and found that certain prompts can release up to 50 times more CO2 emissions than others. Different AI models use a different number of parameters; those with more parameters often perform better. The study examined 14 large language models (LLMs) ranging from 7 billion to 72 billion parameters, asking them the same 1,000 benchmark questions across a range of subjects. Parameters are the internal variables that a model learns during training, and then uses to produce results.
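
The token counts that drive these differences are easy to inspect directly. The sketch below uses the tiktoken tokenizer to compare two prompts and then scales the counts by an assumed per-token energy and grid carbon intensity; both conversion factors are placeholders for illustration, not values taken from the Frontiers study.

```python
# Compare token counts of two prompts and scale them by assumed per-token costs.
# The energy and carbon factors below are illustrative placeholders,
# not figures from the study.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

short_prompt = "Define photosynthesis."
long_prompt = "Explain photosynthesis step by step, reasoning carefully about each stage."

WH_PER_TOKEN = 0.002   # assumed inference energy per token, in watt-hours (placeholder)
G_CO2_PER_WH = 0.4     # assumed grid carbon intensity, grams CO2 per Wh (placeholder)

for prompt in (short_prompt, long_prompt):
    n_tokens = len(enc.encode(prompt))
    est_co2 = n_tokens * WH_PER_TOKEN * G_CO2_PER_WH
    print(f"{n_tokens:3d} tokens -> ~{est_co2:.4f} g CO2 : {prompt!r}")
```

The study's 50x gap comes mostly from the response side: prompts that invite long, step-by-step "reasoning" answers make the model generate far more tokens, and under any per-token energy factor the emissions scale with that count.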