Plotting

Results


Crazy shapeshifting drone inspired by dragons forces itself around objects

FOX News

Graduate students at the University of Tokyo have outdone themselves and are changing the way we look at drones with their newest invention. They created a group of futuristic-looking drone prototypes that can change their structural shape midair. This could be a game changer if the drones were to be used by companies or the military for moving and transporting things. The students were inspired by the idea of a dragon flying through the air, as seen in shows like "Game of Thrones," and by the way dragons twist and turn their bodies as they fly.


The Delusion at the Center of the A.I. Boom

Slate

The HBO show is a prequel to Game of Thrones, and that series ended so badly I don't want anything more to do with that fictional world. But maybe A.I. could change my mind? At South by Southwest earlier this month, Greg Brockman, president of OpenAI, the company that created ChatGPT, said, "Imagine if you could ask your A.I. to make a new ending that goes in a different way." Could A.I. be the solution to fixing every novel and script someone has a problem with--customizing revisions to make them shorter or longer, less or more violent, more or less "woke"? Even if A.I. could make changes to movies and books that you personally find dissatisfying, part of those works' value lies in the shared conversations they inspire--conversations that require opinions about common, historically situated texts.


Vanderbilt staff apologizes after using AI to send campus email about Michigan State shooting

FOX News

Rep. Bill Huizenga, R-Mich., joined 'Fox & Friends First' to discuss the latest details surrounding the fatal shooting at Michigan State University and an upcoming briefing on the flying objects in U.S. airspace. Members of the Vanderbilt staff apologized on Friday for using ChatGPT, an artificial intelligence (AI) chatbot, to send an email to students calling for the community to come together following the shooting at Michigan State University. The email was sent on Thursday by the Office of Equity, Diversity, and Inclusion (EDI) at the university's Peabody College and included a note at the bottom indicating it had been written using ChatGPT, Vanderbilt's official student newspaper, The Vanderbilt Hustler, first reported on Friday. Associate Dean Nicole Joseph sent another email on Friday and said using ChatGPT to write the message was "poor judgment," according to the Hustler.


California Berkeley university campus worker finds human skeleton in unused residence building: police

FOX News

Skeletonized human remains were found in an unused residence hall on the campus of the University of California, Berkeley last week, officials said. The skeleton was found in the shuttered, graffiti-ridden building on the Clark Kerr Campus on Jan. 10, but it remains unclear how long the remains had been there, police said.


Deepfake: Curbing A Prolific Phenomenon

#artificialintelligence

The threat of deepfake content has been a prevalent issue since 2017, when a user started a viral phenomenon by combining machine learning software and AI to create inappropriate content featuring the faces of famous celebrities. Utilising a form of artificial intelligence called deep learning to manipulate and produce falsified pieces of content, deepfakes are the 21st century's answer to Photoshop. As the technology continues to develop and spread, deepfakes have become a growing public concern. The World Intellectual Property Organisation states that deepfakes can cause problems such as violations of human rights, the right to privacy and personal data protection rights. With this technology being relatively new, the public has not yet acquainted itself with its dangers.



The Bruce Willis Deepfake Is Everyone's Problem

WIRED

Jean-Luc Godard once claimed, regarding cinema, "When I die, it will be the end." Godard passed away last month; film perseveres. Yet artificial intelligence has raised a kindred specter: that humans may go obsolete long before their artistic mediums do. Novels scribed by GPT-3; art conjured by DALL·E--machines could be making art long after people are gone. As deepfakes evolve, fears are mounting that future films, TV shows, and commercials may not need them at all.



Social intelligence is not sentience

#artificialintelligence

On Saturday morning, June 11, Jeff Bezos' newspaper The Washington Post published a story under the headline "The Google engineer who thinks the company's AI has come to life." The headline was followed by a brief explanation of Blake Lemoine, a Southern-grown, former U.S. military, ex-convict, Christian mystic, AI researcher, father, and genius of compassion (I added that last part) and his belief that there's "a ghost in the machine." If your eyes haven't rolled to the back of your head yet, then chances are you're reading this from the front porch of a double-wide trailer parked somewhere below the Mason-Dixon line with a glass of sweet tea in your hand and a coon dog at your feet. Which is clearly not something any "reasonable" person would choose to do in the year 2022. Or if, like me, you're a bit further removed from the stereotype, you might be standing in front of a classroom of semi-attentive undergraduate students at a Southeastern research university, making your best effort to bridge the ever-widening practical and theoretical gaps between old-world journalistic traditions and new-age neoliberal ideologies related to the function of human language in society.


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage. Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving. In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound. Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do and, invariably, some of us want to do something else, so we hack; not everyone, but enough of us do. Hacking is normally thought of as something you can do to computers. But hacks can be perpetrated on any system of rules--including the tax code. The tax code isn't computer software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
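
To make that closing analogy concrete, here is a minimal sketch of the tax code pictured as deterministic code: a function that takes a year's financial information as input and returns the tax owed as output. The function name, bracket thresholds, and rates below are invented for illustration only and do not come from the essay or any real tax schedule.

```python
# A hypothetical sketch of "the tax code as code": a deterministic function
# that maps a year's financial information to the amount of tax owed.
# The brackets and rates are made up purely for illustration.

def tax_owed(income: float, deductions: float = 0.0) -> float:
    """Return the tax owed for one year's financial information."""
    taxable = max(income - deductions, 0.0)

    # Illustrative progressive brackets: (upper bound of bracket, marginal rate).
    brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

    owed = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if taxable <= lower:
            break
        owed += (min(taxable, upper) - lower) * rate
        lower = upper
    return round(owed, 2)


# Deterministic: the same input always produces the same output.
print(tax_owed(income=55_000, deductions=5_000))  # 10000.0
```

A hack, in the sense the essay uses, would then be an input or a combination of rules that this "code" handles in a way its designers never intended, lowering the output without technically breaking any rule.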