"Ballerina" Leaps Into John Wick's Bloody World

The New Yorker

It's been instructive to see "Ballerina," which opens this week, so soon after the new "Mission: Impossible" installment. In the latter, it's hard to top Tom Cruise's intrepid stunt work, which reaches its zenith in a pair of extended sequences (one in a submarine, the other on biplanes), but the story, involving a diabolical scheme using A.I. to commandeer and launch the world's nuclear weaponry, is a mere pretext. Going to "Mission: Impossible" for the story is like going to Casablanca for the waters. In contrast, "Ballerina"--like the four John Wick films that it's spun off from--is, strangely, far better at story than at action. The first John Wick film is the weakest, because the framework for the franchise was still unformed: a retired hit man (Keanu Reeves) gets back into action to respond to a mobster's attacks.


Meet the young team of software engineers slashing government waste at DOGE: report

FOX News

Fox News host Laura Ingraham gives her take on the spending freeze on USAID on 'The Ingraham Angle.' Tesla and SpaceX CEO Elon Musk's DOGE efforts to slash government waste and streamline the federal bureaucracy include the hiring of several up-and-coming young software engineers tasked with "modernizing federal technology and software to maximize governmental efficiency and productivity." Six young men between the ages of 19 and 24 -- Akash Bobba, Edward Coristine, Luke Farritor, Gautier Cole Killian, Gavin Kliger and Ethan Shaotran -- have taken up various roles furthering the DOGE agenda, according to a report from Wired. Bobba was part of the highly regarded Management, Entrepreneurship, and Technology program at UC Berkeley and has held internships at the Bridgewater Associates hedge fund, Meta and Palantir. "Let me tell you something about Akash," Grata AI CEO Charis Zhang posted on X about Bobba in recent days. "During a project at Berkeley, I accidentally deleted our entire codebase 2 days before the deadline. Akash just stared at the screen, shrugged, and rewrote everything from scratch in one night -- better than before. We submitted early and got first in the class. I trust him with everything I own."


Joe Biden Has a Secret Weapon Against Killer AI. It's Bureaucrats

WIRED

As ChatGPT's first birthday approaches, presents are rolling in for the large language model that rocked the world. From President Joe Biden comes an oversized "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." And UK prime minister Rishi Sunak threw a party with a cool extinction-of-the-human-race theme, wrapped up with a 28-country agreement (counting the EU as a single country) promising international cooperation to develop AI responsibly. Before anyone gets too excited, let's remember that it has been over half a century since credible studies predicted disastrous climate change. Now that the water is literally lapping at our feet and heat is making whole chunks of civilization uninhabitable, the international order has hardly made a dent in the gigatons of fossil fuel carbon dioxide spewing into the atmosphere.


China could use AI deepfake technology to disrupt 2024 election, GOP senator warns

FOX News

Senator Pete Ricketts of Nebraska told Fox News Digital on Thursday that he's concerned about China's use of Artificial Intelligence (AI) after a report claimed pro-Chinese groups were spreading CCP propaganda using AI-generated news anchors. EXCLUSIVE: China's expansive artificial intelligence (AI) operations could play a concerning role in the 2024 election cycle, Sen. Pete Ricketts warned on Thursday. "There's absolutely a possibility that they could do that for the 2024 election, and that's what we have to be on guard [for]," Ricketts told Fox News Digital in an interview in his Senate office. During a Senate Foreign Relations subcommittee hearing earlier this month, Ricketts referenced China and its use of AI technology to create "deepfakes," which are fabricated videos and images that can look and sound like real people and events. A report released earlier this year by a U.S.-based research firm claimed a "pro-Chinese spam operation" was using AI deepfakes technology to create videos of fake news anchors reciting Beijing's propaganda.


Too much AI has big drawbacks for doctors -- and their patients

#artificialintelligence

Artificial intelligence in medical care is here to stay -- but it can do more harm than good, especially if those implementing it lose sight of the essential importance of a doctor's clinical judgment. As a primary-care physician, my job is to evaluate and re-evaluate a patient in an ongoing, personalized way that even the best AI could never attain. Here's an example: An 80-year-old patient of mine with chronic heart failure drank and ate too much on a recent Caribbean cruise and ended up in a hospital, his lungs filled with fluid. A cardiac echo revealed an ejection fraction (how well the heart is pumping) of only 15%. In fact, a recent study concluded AI might have assessed that ejection fraction better than the cardiologist who did so, and this kind of assessment is clearly going to be an important role for AI. But the actual management of the patient went well beyond a simple number.


AI expert in Congress warns against rush to regulation: 'We're not there yet'

FOX News

FOX Business correspondent Lydia Hu has the latest on jobs at risk as AI further develops on 'America's Newsroom.' The only member of Congress with an advanced degree in artificial intelligence says lawmakers should move slowly to impose new regulations on AI, in part because policymakers and even experts in the field have yet to lay out clear regulatory objectives. Rep. Jay Obernolte, R-Calif., says this deliberate approach is a good thing, despite pressure from high-profile tech leaders to halt AI development until its dangers are better understood. In an interview with Fox News Digital, Obernolte said it makes no sense to start regulating until Congress knows precisely what dangers it's trying to avoid. "Before we can create a regulatory framework around AI, we have to be very explicit about what our goals are with our regulation," Obernolte said.

Artificial Intelligence vs Human Intelligence: Who Takes the Cake on Indonesia's Bureaucracy?

#artificialintelligence

The use of technology to improve human life and activity has long been established. Nowadays, we see technological innovations beyond what our predecessors could have ever imagined. People used to travel on foot or by riding an animal of some sort. Then came the invention of carriages, again with animals to pull them. Many years later, we now have cars -- which are basically automated carriages, if you think about it -- along with trains, ships, planes, and all other sorts of vehicles I haven't mentioned. And this is only transportation technology.


Building better startups with responsible AI – TechCrunch

#artificialintelligence

Founders tend to think responsible AI practices are challenging to implement and may slow the progress of their business. They often jump to mature examples like Salesforce's Office of Ethical and Humane Use and think that the only way to avoid creating a harmful product is building a big team. The truth is much simpler. I set out to learn how founders were thinking about responsible AI practices on the ground by speaking with a handful of successful early-stage founders and found many of them were implementing responsible AI practices. They just call it "good business."


Center for Applied Data Ethics suggests treating AI like a bureaucracy

#artificialintelligence

A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess the performance of large machine learning models with the potential to harm people. "This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State," the author wrote. The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei. The researchers particularly look to the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems.


Human Touch Keeps AI From Getting Out of Touch - AI Trends

#artificialintelligence

AI is charting new ways to become out of touch, potentially. Maybe the frame of mind around agile, sometimes spontaneous, software development that had been going on in decentralized organizations before AI took over, is coming into conflict with the mindset needed to feed AI systems with a constant high-volume flow of clean, well-structured data. This suggestion was broached by Sylvain Duranton, senior partner at Boston Consulting Group, in a recent TED Talk. "For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever," he stated in a recent account in Forbes.