Ministers not doing enough to control AI, says UK professor
One of the professors at the forefront of artificial intelligence has said ministers are not doing enough to protect against the dangers of super-intelligent machines in the future.

In the latest contribution to the debate about the safety of the ever-quickening development of AI, Prof Stuart Russell told the Times that the government was reluctant to regulate the industry despite concerns that the technology could get out of control and threaten the future of humanity. Russell, a professor at the University of California, Berkeley, and a former adviser to the US and UK governments, told the Times he was concerned that ChatGPT, which was released in November, could become part of a super-intelligent machine that could not be constrained. "How do you maintain power over entities more powerful than you – for ever?" he asked. "If you don't have an answer, then stop doing the research. The stakes couldn't be higher: if we don't control our own civilisation, we have no say in whether we continue to exist."

Since ChatGPT's release to the public last year, which has been used to write prose and has already worried lecturers and teachers about its use in universities and schools, the debate over its long-term safety has intensified. Elon Musk, the Tesla chief executive and Twitter owner, and the Apple co-founder Steve Wozniak, along with 1,000 AI experts, wrote a letter warning that there was an "out-of-control race" going on at AI labs and calling for a pause on the creation of giant-scale AI. The letter warned the labs were developing "ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control".

There is also concern about AI's wider application. A House of Lords committee this week heard evidence from Sir Lawrence Freedman, a war studies professor, who spoke about concerns over how AI might be used in future wars. Google's rival chatbot, Bard, is due to be released in the EU later this year.
Russell himself previously worked for the UN on how to monitor the nuclear test-ban treaty, and was asked to work with Whitehall earlier this year. He said: "The Foreign Office … talked to a lot of people and they concluded that loss of control was a plausible and extremely high-significance outcome. And then the government came out with a regulatory approach that says: 'Nothing to see here … we'll welcome the AI industry as if we were talking about making cars or something like that'."
- Europe > United Kingdom (0.57)
- North America > United States > California (0.26)
- Government > Military (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.57)
Bones and All Is Clearance-Rack Grand Guignol
I'm writing this post from the guest room in my mom's house, which is peppered with old knick-knacks of mine--to summon the spirit of my childhood room, I suppose. While flipping through my photo albums, I was tickled to find a blurry picture of the poster for Phone Booth, clearly taken by me on a disposable camera outside of a movie theater. I was probably too young to be watching a gunman thriller--thanks, Mom--but I'm pretty sure my affection for it had a lot to do with Colin Farrell, who was a relative unknown when that movie came out in 2002. To this day, I'm a bit gaga over him, though I think part of the reason my puppy love has turned into something more enduring is that, as I've gotten older and my tastes have evolved, so has the actor's persona. Not to downplay his macho heartthrob phase in the aughts--I still go catatonic whenever I think about him salsa dancing in Miami Vice, and I sense noted MV-heads Bilge and David feel the same way--but it has been a delight to see him take on increasingly strange, cerebral roles for directors like Yorgos Lanthimos and Sofia Coppola while also pushing himself, unafraid to get ugly and unhinged, in blockbusters like The Batman.
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.25)
- North America > United States > Virginia > Manassas (0.05)
- North America > United States > Missouri (0.05)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
AI experts are increasingly afraid of what they're creating
In 2018 at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: "AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire." Pichai's comment was met with a healthy dose of skepticism. But nearly five years on, it looks increasingly prescient. AI translation is now so advanced that it's on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate -- making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fair art competitions. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer.
- Asia > China (0.05)
- North America > United States > California (0.04)
- Information Technology > Services (0.66)
- Education > Educational Setting > Higher Education (0.34)
Machine Learning Approaches for Principle Prediction in Naturally Occurring Stories
Nahian, Md Sultan Al, Frazier, Spencer, Harrison, Brent, Riedl, Mark
Value alignment is the task of creating autonomous systems whose values align with those of humans. Past work has shown that stories are a potentially rich source of information on human values; however, it has been limited to considering values in a binary sense. In this work, we explore the use of machine learning models for the task of normative principle prediction on naturally occurring story data. To do this, we extend a dataset previously used to train a binary normative classifier with annotations of moral principles. We then use this dataset to train a variety of machine learning models, evaluate these models, and compare their results against humans who were asked to perform the same task. We show that while individual principles can be classified, the ambiguity of what "moral principles" represent poses a challenge for both human participants and autonomous systems faced with the same task.
- North America > United States > Kentucky (0.04)
- Europe > Austria (0.04)
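The abstract above describes training models to map story text to moral-principle labels. As a minimal, hypothetical sketch of that task shape -- not the paper's actual models or data, and with invented sentences and labels -- a naive Bayes bag-of-words classifier over annotated examples:

```python
from collections import Counter, defaultdict
import math

# Toy training data: story sentences annotated with hypothetical principle labels.
TRAIN = [
    ("she returned the lost wallet to its owner", "honesty"),
    ("he told the truth even though it hurt", "honesty"),
    ("they shared their food with the stranger", "generosity"),
    ("she donated her savings to the shelter", "generosity"),
]

def train_nb(examples):
    """Count words per label and label frequencies for naive Bayes."""
    word_counts = defaultdict(Counter)  # label -> word -> count
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Return the label with the highest log-probability under add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

wc, lc = train_nb(TRAIN)
print(predict("he gave his coat to a stranger", wc, lc))  # "generosity"
```

The paper's point about ambiguity shows up even at this scale: a sentence can plausibly evidence several principles at once, so a single-label classifier is forced into arbitrary choices.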
The Future of Diabetes Care – Artificial Intelligence, Telemedicine, and Automated Insulin Delivery
A fascinating session at the EASD 2022 conference on emerging technologies shed light on where we are with AID and telemedicine, and what leading researchers in diabetes believe is coming next in diabetes management. Healthcare is rapidly evolving, and now more than ever, robots and artificial intelligence have gone from science fiction to critical components of diabetes management. At the EASD 2022 conference in Stockholm, Sweden, researchers further explored this concept in a session titled, "A New Hope or Strange New Worlds: Submerging diabetes into emerging technologies." Dr. Moshe Phillip, head of the Institute of Endocrinology and Diabetes at Schneider Children's Medical Center of Israel, began by demonstrating how continuous glucose monitors (CGMs) represent a paradigm shift in diabetes technology. "CGM is the most important tool in the last 20 years," he said.
- Europe > Sweden > Stockholm > Stockholm (0.25)
- Asia > Middle East > Israel (0.25)
- North America > United States (0.05)
- Europe > United Kingdom (0.05)
RStudio AI Blog: Starting to think about AI Fairness
The topic of AI fairness metrics is as important to society as it is confusing. It is confusing for a number of reasons: terminological proliferation, an abundance of formulae, and, last but not least, the impression that everyone else seems to know what they're talking about. This text hopes to counteract some of that confusion by starting from a common-sense approach of contrasting two basic positions: on the one hand, the assumption that dataset features may be taken as reflecting the underlying concepts ML practitioners are interested in; on the other, that there is inevitably a gap between concept and measurement, a gap that may be bigger or smaller depending on what is being measured. In contrasting these fundamental views, we bring together concepts from ML, legal science, and political philosophy.
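The post names no specific metric, but as a hypothetical illustration of the "abundance of formulae" it alludes to, one of the simplest fairness metrics, demographic parity difference, just compares positive-prediction rates across two groups:

```python
# Hypothetical illustration: demographic parity difference compares the rate of
# positive (e.g. "approve") predictions a model makes for two groups.
def positive_rate(preds):
    """Fraction of predictions that are positive (1) in a list of 0/1 labels."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in positive-prediction rates between groups A and B."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Group A receives positive predictions 75% of the time, group B 50%.
print(demographic_parity_difference([1, 1, 1, 0], [1, 0, 1, 0]))  # 0.25
```

Even this tiny formula carries the concept-versus-measurement gap the post describes: it treats the recorded group labels and predictions as faithful proxies for the social categories and outcomes we actually care about.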
Kevin Durant's latest bet
Ethos, the stealth insurtech firm that wants to make life insurance accessible, cheap, and easy, has officially come out of hiding with an $11.5m investment led by Sequoia Capital. But your classic run-of-the-mill VC giants aren't the only ones interested in this ever-growing industry… Of course, life insurance isn't a typical knee-jerk investment for Hollywood moguls and hall-of-famers, but these days more venture capitalists are piling into the insurtech space. According to a May report, the number of venture capital investors in the sector increased from 53 in 2012 to 217 in 2017 -- and since that rise, those investors have shoved $9B into the industry. Why are they so interested? Life insurance is on the decline -- according to TechCrunch, only 60% of Americans have life insurance in 2018, down from 77% in 1989 -- and that's exactly why guys like Iron Man and Beyoncé's husband want a piece.
- Banking & Finance > Insurance (1.00)
- Banking & Finance > Capital Markets (0.91)
Volvo Takes Stake In Laser Vision Upstart Luminar, To Use Its Sensors In Self-Driving Cars
Lidar maker Luminar says its sensors generate the highest-resolution images currently available for autonomous vehicles. Luminar, a maker of laser lidar sensors that self-driving cars need to see their surroundings in 3-D, will supply the latest versions of its technology to Sweden's Volvo Cars, which is also investing in the Silicon Valley startup. Led by 23-year-old optics wunderkind Austin Russell, Luminar will provide both its hardware, combining the lidar sensor and cameras, and new perception software that helps the artificial intelligence driving Volvo's vehicles more rapidly interpret copious amounts of sensor data flowing in. The investment in Luminar by Volvo Cars Tech Fund is "significant," according to Russell, though neither he nor the automaker is disclosing the amount. "This is a partnership with Volvo Cars to power their autonomous vehicle development effort with our lidar sensing platform at its core," Russell, who founded Luminar in 2012 while still in high school, told Forbes.
- North America > United States > California (0.27)
- Europe > Sweden (0.25)
- North America > United States > Florida > Orange County > Orlando (0.05)
- North America > United States > Arizona (0.05)
Volvo Is Using Luminar's Lidar to Build Self-Driving Cars
The key technical hurdle standing between you and your truly self-driving car is a double-decker: the car needs to see its surroundings, and it needs to understand them, too. And today, Volvo announced a move that could help it clear both of those barriers: It has struck a deal with lidar maker Luminar, investing an undisclosed amount in the startup through its recently launched venture capital fund. Just about every player in the autonomous driving space agrees lidar--which builds a 3-D map of its surroundings by firing millions of laser pulses every second and measuring how long they take to bounce back--is a vital sensor. The trouble is that it's a relatively young technology, and it has taken a while for manufacturers to find the right mix of range, resolution, reliability, and cost. The biggest player in this space, Velodyne (which made the first lidar specifically for driving in 2005), sells its most capable sensor for $75,000.
- Automobiles & Trucks > Manufacturer (1.00)
- Transportation > Ground > Road (0.72)
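The article's description of lidar -- firing laser pulses and timing how long they take to bounce back -- reduces to a time-of-flight calculation: range equals the speed of light times the round-trip time, divided by two. A minimal sketch of that arithmetic (illustrative only, not any vendor's actual processing pipeline):

```python
# Time-of-flight ranging, the principle behind the lidar sensors described above:
# a pulse travels to the target and back, so range = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the target in metres, given the measured round-trip pulse time."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A pulse that returns after about 1 microsecond indicates a target roughly 150 m away.
print(round(range_from_round_trip(1e-6), 1))  # 149.9
```

This is also why pulse timing must be resolved to fractions of a nanosecond: each nanosecond of round-trip time corresponds to only about 15 cm of range.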