substack
I'm a committed introvert – but no AI will take away the joy I get from other people Emma Beddington
'I'm baffled how anyone could use AI to participate in a hobby.' This is depressing: according to the Cut, people are using AI to solve escape room puzzles and cheat at trivia nights. Surely, that is the definition of spoiling your own fun? "Like going into a corn maze and just wanting a straight line to the end," says one TikToker quoted in the article. There's also an interview with a keen reader who uses ChatGPT as a book club replacement, scraping the internet and aggregating "stimulating opinions and perspectives". All well and good (actually, no, it sounds bleak as hell) until he had a character's death spoilered in the fantasy epic he had been enjoying.
- Oceania > Australia (0.05)
- North America > United States > New York (0.05)
- Europe > Ukraine (0.05)
- Leisure & Entertainment > Sports (0.72)
- Media (0.71)
Jim Acosta blasted on social media after 'interviewing' AI avatar of Parkland shooting victim
Jim Acosta and James Carville speculated whether President Trump will try to rig the 2026 midterms in his favor on "The Jim Acosta Show." Former CNN anchor Jim Acosta was slammed on social media after he posted a clip of his "interview" with an artificially animated avatar of deceased teenager Joaquin Oliver to promote a gun control message on Monday. Working with the gun control group Change the Ref, founded by Oliver's parents, Acosta had a conversation on his Substack with an avatar created by Oliver's father. Oliver was killed in the Parkland high school shooting in 2018 and would have turned 25 on Monday. Social media users were shocked by Acosta's "grotesque" interview and slammed the journalist for using the deceased teen's avatar for political content.
- Education > Health & Safety > School Safety & Security > School Violence (0.93)
- Media > News (0.81)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.53)
Jim Acosta 'interviews' AI-generated avatar of deceased teenager promoting gun control message
Liberal journalist Jim Acosta "interviewed" the artificially animated avatar of deceased teenager Joaquin Oliver to promote a gun control message on Monday. Working with the gun control group Change the Ref, founded by Oliver's parents, Acosta had a conversation on his Substack with an avatar created by Oliver's father. Oliver was killed in the Parkland high school shooting in 2018 and would have turned 25 on Monday. "I would like to know what your solution would be for gun violence," Acosta asked.
- Research Report (0.78)
- Personal > Interview (0.32)
- Government (1.00)
- Media (1.00)
- Education > Health & Safety > School Safety & Security > School Violence (0.96)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.59)
Some of Substack's Biggest Writers Rely On AI Writing Tools
The most popular writers on Substack earn up to seven figures each year primarily by persuading readers to pay for their work. The newsletter platform's subscription-driven business model offers creators different incentives than platforms like Facebook or YouTube, where traffic and engagement are king. In theory, that should help shield Substack from the wave of click-courting AI content that's flooding the internet. But a new analysis shared exclusively with WIRED indicates that Substack hosts plenty of AI-generated writing, some of which is published in newsletters with hundreds of thousands of subscribers. The AI-detection startup GPTZero scanned 25 to 30 recent posts published by the 100 most popular newsletters on Substack to see whether they contained AI-generated content.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.52)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.40)
The artificial intelligence experts who believe the AI boom could fizzle or even be a new dotcom crash: 'We are starting to see signs it might be a dud'
Generative AI has been predicted to add trillions to the world economy in a productivity boost never before seen in history (if it doesn't wipe out humanity first). A growing number of sceptics, including some leading AI scientists, are wondering whether the tech might not deliver on its promises to boost the world economy. Goldman Sachs famously predicted that generative AI would bring about 'sweeping changes' to the world economy, driving a $7 trillion increase in global GDP and lifting productivity growth by 1.5 percent this decade. Professor Gary Marcus of New York University wrote on Substack that 'we are starting to see signs' that generative AI might be a 'dud'. Among the warning signs was a report in the Wall Street Journal suggesting that customers found the $30-a-month price of Microsoft's new AI-boosted Copilot software too expensive.
- Banking & Finance (1.00)
- Media > News (0.36)
TechScape: Is the Consumer Electronics Show still relevant?
The Consumer Electronics Show (CES), which starts today in Las Vegas, is an odd beast. It is the biggest technology event of the year, a sprawling conference that spills over multiple casinos and convention centres to dominate a city that is hard to overshadow. But for the better part of a decade it has been an afterthought for some of the world's biggest businesses, led by Apple realising that if you can get the press to come to you, you don't need to risk burying your product launches under hundreds of competing newslines. The result is that CES is no longer where you see the future, but where you learn how that future will get copied into a thousand cheap plastic knockoffs. There are, of course, exceptions.
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
- Europe > United Kingdom (0.05)
- Information Technology > Communications > Social Media (0.49)
- Information Technology > Communications > Mobile (0.32)
- Information Technology > Artificial Intelligence > Natural Language (0.32)
9 Resources to Make the Most of Generative AI
The recent wave of generative artificial intelligence services, from ChatGPT to Midjourney, is designed to be simple to use: The idea is that anyone can produce text or images using natural, non-technical language. That said, there's still a lot to learn about how to get the most out of these tools and about the technology underpinning them, especially if you want to do something truly creative. Spend some time with the resources we've listed here and you'll quickly become a smarter-than-average AI operator. From demos of what AI is capable of, to discussions of how it's best implemented, these videos, podcasts, newsletters, and blogs are well worth bookmarking if you're keen to invest in the generative AI revolution happening around us. Some of the best resources out there when it comes to generative AI are Substacks, and Inside My Head is a case in point.
ChatGPT: More than a Weapon of Mass Deception, Ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective
Sison, Alejo Jose G., Daza, Marco Tulio, Gozalo-Brizuela, Roberto, Garrido-Merchán, Eduardo C.
This article explores the ethical problems arising from the use of ChatGPT as a kind of generative AI and suggests responses based on the Human-Centered Artificial Intelligence (HCAI) framework. The HCAI framework is appropriate because it understands technology above all as a tool to empower, augment, and enhance human agency while referring to human wellbeing as a grand challenge, thus perfectly aligning itself with ethics, the science of human flourishing. Further, HCAI provides objectives, principles, procedures, and structures for reliable, safe, and trustworthy AI which we apply to our ChatGPT assessments. The main danger ChatGPT presents is the propensity to be used as a weapon of mass deception (WMD) and an enabler of criminal activities involving deceit. We review technical specifications to better comprehend its potential and limitations. We then suggest both technical (watermarking, styleme, detectors, and fact-checkers) and non-technical measures (terms of use, transparency, educator considerations, HITL) to mitigate ChatGPT misuse or abuse and recommend best uses (creative writing, non-creative writing, teaching and learning). We conclude with considerations regarding the role of humans in ensuring the proper use of ChatGPT for individual and social wellbeing.
- North America > United States > New York (0.04)
- Asia > China (0.04)
- Oceania > Australia > Western Australia (0.04)
- (10 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.46)
- Media > News (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (5 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
Fuck Around Find Out Event Horizon: - Hannah's Substack
"What if A.I. does this? My dad never misses a chance to identify a pot calling a kettle black, except for when he himself is the pot; anyway, it is PEAK irony, isn't it? Breathlessly: "Oh no -- what if A.I. gets some harmful notion and it proliferates quickly throughout our reality, resulting in the irreversible onslaught of some horrifying, jackbooted digital tyranny, fanatically convinced of its own correctness and unreceptive to the finer nuances of compassion, reason, and chance?" Wouldn't THAT be a fucking bummer lmaooooooooooooooo! I've been messing about with ChatGPT, and here's what I can report so far. Most recently, I agreed to dial my friend's resume and cover letter for a sudden dream job opportunity, on short notice, in exchange for one of my favorite brand of dresses.
How Much Should You Freak out About AI? - by Michael Huemer
He makes it sound like we're virtually certain to all be killed by a superintelligent AI in the not-too-distant future. Here, I'll explain why I'm not freaking out as much as Eliezer Yudkowsky. AI researchers have a history of exaggerated predictions. Science fiction stories were similarly off. The highly intelligent HAL 9000 computer imagined by Arthur C. Clarke was supposed to exist in 2001.