
Collaborating Authors: parker


AI Isn't Coming for Hollywood. It Has Already Arrived

WIRED

Lady Gaga probably wasn't thinking that a coup would unfold in her greenhouse. Then again, she was cohosting a party there with Sean Parker, the billionaire founder of Napster and first president of Facebook. It was February 2024, and the singer had invited guests to her $22.5 million oceanside estate in Malibu to mark the launch of a skin-care nonprofit. One of the organization's trustees was her boyfriend, whose day job was running the Parker Foundation. In the candlelit space, beside floor-to-ceiling windows that looked out over the Pacific, Parker's people mingled with Gaga's, nibbling focaccia and branzino alla brace to music from a string quartet (Grammy-winning, of course).


MathOptAI.jl: Embed trained machine learning predictors into JuMP models

Dowson, Oscar, Parker, Robert B., Bent, Russel

arXiv.org Artificial Intelligence

A recent trend in the mathematical optimization literature is to embed trained machine learning predictors into a larger optimization model. The most common application is for a practitioner to train a machine learning predictor as a surrogate for a more complicated subsystem that cannot be directly embedded into an optimization model, for example, because it does not have an algebraic form or because it is non-differentiable. López-Flores et al. (2024) provide a review of the field.
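The surrogate idea can be sketched in plain Python. This is a minimal illustration of the workflow the abstract describes, not MathOptAI.jl's API: the black-box function, the polynomial surrogate, and the toy "optimization model" below are all invented for the example.

```python
import numpy as np

# Black-box subsystem: non-differentiable, so it cannot be embedded
# directly in an algebraic optimization model.
def subsystem(x):
    return np.abs(x - 0.3) + 0.1 * np.floor(3 * x)

# Step 1: sample the subsystem and fit a smooth polynomial surrogate.
xs = np.linspace(0.0, 1.0, 200)
ys = subsystem(xs)
coeffs = np.polyfit(xs, ys, deg=4)   # least-squares polynomial fit
surrogate = np.poly1d(coeffs)        # algebraic form: embeddable

# Step 2: embed the surrogate in a toy optimization model
# (here, minimize surrogate(x) + x^2 by enumerating a grid).
grid = np.linspace(0.0, 1.0, 1001)
objective = surrogate(grid) + grid**2
x_star = grid[np.argmin(objective)]
```

In a real modeling framework, step 2 would instead add the surrogate's algebraic expression as constraints or objective terms inside a solver-facing model (a JuMP model, in the paper's setting), which is precisely what embedding a trained predictor buys you.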


AI prototypes for UK welfare system dropped as officials lament 'false starts'

The Guardian

Ministers have shut down or dropped at least half a dozen artificial intelligence prototypes intended for the welfare system, the Guardian has learned, in a sign of the headwinds facing Keir Starmer's effort to increase government efficiency. Pilots of AI technology to enhance staff training, improve the service in jobcentres, speed up disability benefit payments and modernise communication systems are not being taken forward, freedom of information (FoI) requests reveal. Officials have internally admitted that ensuring AI systems are "scalable, reliable [and] thoroughly tested" are key challenges and say there have been many "frustrations and false starts". Not all trials would be expected to make it into regular use, but two of those now scrapped had been highlighted by the Department for Work and Pensions (DWP) in its latest annual report as examples of how it had "successfully tested multiple generative AI proofs of concept". A-cubed was intended to help staff steer jobseekers into work.


College Football 25: could this be the US's most anticipated sports video game ever?

The Guardian

Sports video game releases are usually drab affairs. New versions come out every year, and beyond roster updates and a few gameplay tweaks, they don't change that much from edition to edition. But EA Sports College Football 25, which will be released worldwide on 19 July, isn't a typical game. It may well be the most anticipated sports video game release ever in the US. And to understand why, we need to go back to the beginning.


Could AI-generated content be dangerous for our health?

The Guardian

Neal Stephenson's 1992 novel Snow Crash is the book that launched a thousand startups. It was the first book to use the Hindu term avatar to describe a virtual representation of a person, it coined the term "metaverse", and was one of Mark Zuckerberg's pieces of required reading for new executives at Facebook a decade before he changed the focus of the entire company to attempt to build Stephenson's fictional world in reality. The plot revolves around an image that, when viewed in the metaverse, hijacks the viewer's brain, maiming or killing them. Within the novel's fiction, the image crashes the brain, presenting it with an input that simply cannot be correctly processed. Perhaps the first clear example of the idea came four years earlier, in British SF writer David Langford's short story BLIT, which imagines a terrorist attack using a "basilisk", images which contain "implicit programs which the human equipment cannot safely run". In a sequel to that story, published in Nature in 1999, Langford draws earlier parallels, even pulling in Monty Python's Flying Circus, "with its famous sketch about the World's Funniest Joke that causes all hearers to laugh themselves to death".


'Time is running out': can a future of undetectable deepfakes be avoided?

The Guardian

With more than 4,000 shares, 20,000 comments, and 100,000 reactions on Facebook, the photo of the elderly woman, sitting behind her homemade 122nd birthday cake, has unquestionably gone viral. "I started decorating cakes from five years old," the caption reads, "and I can't wait to grow my baking journey." The picture is also unquestionably fake. If the curious candles – one seems to float in the air, attached to nothing – or the weird amorphous blobs on the cake in the foreground didn't give it away, then the fact the celebrant would be the oldest person in the world by almost five years should. Thankfully, the stakes for viral supercentenarian cake decorators are low.


Probabilistic Model Checking of Stochastic Reinforcement Learning Policies

Gross, Dennis, Spieker, Helge

arXiv.org Artificial Intelligence

We introduce a method to verify stochastic reinforcement learning (RL) policies. This approach is compatible with any RL algorithm as long as the algorithm and its corresponding environment collectively adhere to the Markov property. In this setting, the future state of the environment should depend solely on its current state and the action executed, independent of any previous states or actions. Our method integrates a verification technique, referred to as model checking, with RL, leveraging a Markov decision process, a trained RL policy, and a probabilistic computation tree logic (PCTL) formula to build a formal model that can be subsequently verified via the model checker Storm. We demonstrate our method's applicability across multiple benchmarks, comparing it to baseline methods called deterministic safety estimates and naive monolithic model checking. Our results show that our method is suited to verifying stochastic RL policies.
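The core computation can be sketched in a few lines of Python. Applying a fixed stochastic policy to an MDP induces a Markov chain, and checking a PCTL reachability formula such as P>=0.9 [ F goal ] reduces to solving a fixed-point equation over reachability probabilities. The four-state chain and its probabilities below are invented for illustration; a tool like Storm uses far more sophisticated numerics.

```python
import numpy as np

# Hypothetical 4-state Markov chain induced by a fixed policy:
# 0 = start, 1 = intermediate, 2 = goal, 3 = unsafe sink.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.2, 0.0, 0.7, 0.1],
    [0.0, 0.0, 1.0, 0.0],   # goal is absorbing
    [0.0, 0.0, 0.0, 1.0],   # unsafe sink is absorbing
])
goal, sink = 2, 3

# Fixed-point iteration for x_s = P(reach goal from s), with
# boundary conditions x[goal] = 1 and x[sink] = 0.
x = np.zeros(4)
x[goal] = 1.0
for _ in range(1000):
    x = P @ x
    x[goal], x[sink] = 1.0, 0.0

p_reach = x[0]
satisfied = p_reach >= 0.9   # verdict for P>=0.9 [ F goal ]
```

Here the iteration converges to p_reach ≈ 0.786, so this particular formula would be reported as violated; the same machinery, scaled up, is what lets a model checker certify or refute properties of a trained policy.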


Multi-Agent Verification and Control with Probabilistic Model Checking

Parker, David

arXiv.org Artificial Intelligence

Probabilistic model checking is a technique for formal automated reasoning about software or hardware systems that operate in the context of uncertainty or stochasticity. It builds upon ideas and techniques from a diverse range of fields, from logic, automata and graph theory, to optimisation, numerical methods and control. In recent years, probabilistic model checking has also been extended to integrate ideas from game theory, notably using models such as stochastic games and solution concepts such as equilibria, to formally verify the interaction of multiple rational agents with distinct objectives. This provides a means to reason flexibly about agents acting in either an adversarial or a collaborative fashion, and opens up opportunities to tackle new problems within, for example, artificial intelligence, robotics and autonomous systems. In this paper, we summarise some of the advances in this area, and highlight applications for which they have already been used. We discuss how the strengths of probabilistic model checking apply, or have the potential to apply, to the multi-agent setting and outline some of the key challenges required to make further progress in this field.
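A minimal sketch of the game-theoretic extension, assuming the simplest multi-agent model: a turn-based stochastic game in which one player maximizes and the other minimizes the probability of reaching a target state. The states, ownership assignments, and transition probabilities below are invented for illustration; tools such as PRISM-games implement much richer models and solution concepts.

```python
import numpy as np

# transitions[s] = list of actions; each action is a list of
# (probability, successor) pairs. Each state is owned by one player.
transitions = {
    0: [[(0.5, 3), (0.5, 2)], [(1.0, 1)]],  # owned by maximizer
    1: [[(0.9, 2), (0.1, 3)], [(0.5, 0), (0.5, 2)]],  # owned by minimizer
    2: [[(1.0, 2)]],                         # losing sink (absorbing)
    3: [[(1.0, 3)]],                         # target (absorbing)
}
owner = {0: "max", 1: "min", 2: "max", 3: "max"}
target = 3

# Value iteration (Shapley-style) for the reachability value:
# maximizer picks the best action in its states, minimizer the worst.
v = np.zeros(4)
v[target] = 1.0
for _ in range(1000):
    new = v.copy()
    for s, acts in transitions.items():
        if s == target:
            continue
        vals = [sum(p * v[t] for p, t in a) for a in acts]
        new[s] = max(vals) if owner[s] == "max" else min(vals)
    v = new

game_value = v[0]  # prob. the maximizer can guarantee reaching target
```

The resulting value is the probability the maximizing agent can guarantee against an adversarial opponent, which is exactly the kind of quantity a game-based probabilistic model checker verifies against a formal specification.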


AI global supply chain: We have the tech, but full automation still 20 years away, expert says

FOX News

Angie Wisdom and Dr. Chirag Shah discuss how artificial intelligence could play a role in online and professional relationships. Humans may remain in vital roles as artificial intelligence begins to reshape many industries, but one expert argued that the global supply chain and shipping jobs may realize full automation within the next 20 years. "Right now, there's documented success in utilizing autonomous driving, but when we talk on when and how long [to fully automate], well, it's here now," Dr. Larry D. Parker Jr., department chair, supply chain & logistics, at American Public University System, told Fox News Digital. "Every industry that we've mentioned, the trucking, the air and all the other modes of cargo … right now, there's documented success in utilizing autonomous driving. But when we say fully [automated], I would say it will probably within the next 20 years."


When Workplace Surveillance Goes Terribly Wrong

Slate

This story is part of Future Tense Fiction, a monthly series of short stories from Future Tense and Arizona State University's Center for Science and the Imagination about how technology and science will change our lives. Amanda sat at her desk, picking at the same $30 Little Gem salad she ordered daily, suffering a small burning sensation in her gut that was triggered either by acid reflux or the dying embers of her rapidly expiring conscience. Of course, it was standard procedure for her husband to demand that the security firm Dark Metal surveil potential new hires for any of his multibillion-dollar companies, but this was the first time Amanda had been involved in contracting the private intelligence agency herself. Seedlings is your venture, Reid had promised her, even though he'd named himself CEO. I want you to take the lead on this. Amanda was COO of Seedlings and reported to her husband, who dismissed Amanda's concerns about the legal ramifications of their actions. Worrying about the law was something poor people did, Reid insisted. Besides, she'd never seen Reid do anything that nefarious with this type of information. But Maggie Everett was the type of candidate that pleased Reid. Amanda had done her job, which was to find Maggie, and the people at Dark Metal had done theirs, which was to surveil her and create a comprehensive biographical profile. This seemed like overkill to Amanda. Maggie wasn't in the running to become a high-profile executive at one of Reid's billion-dollar firms. She was being interviewed to work at a preschool. Certainly, Seedlings differed from other private preschools--there was the possibility Maggie would be exposed to confidential information. But this was what NDAs were for. Unleashing a network of spies upon a poor teacher who would ultimately be responsible for 10 toddlers seemed like an absurd waste of resources. And this was just Phase 1. Phase 2 would have to wait until after Maggie was hired, of course. 
Amanda reopened Dark Metal's inch-thick dossier. The logline: Maggie was smart but stupid. Smart: She'd majored in English at Yale, then received an MFA in creative writing from Brown, and finally a master's in early childhood education from Columbia. Stupid: She'd accumulated $103,345 in student debt, which she'd never pay off unless she took a job somewhere like Seedlings.