Why Europe's Efforts to Gain AI Autonomy Might Be Too Little Too Late
This week Microsoft announced that it would invest €3.2 billion ($3.5 billion) in Germany over the next two years. The U.S. tech giant will use the money to double the capacity of its artificial intelligence and data center infrastructure in Germany and expand its training programmes, according to Microsoft vice chair and president Brad Smith. The move follows a similar announcement from November 2023, when Microsoft said it would invest £2.5 billion ($3.2 billion) in infrastructure in the U.K. over the next three years. Both countries hailed the investments as significant steps that would permit them to compete on the world stage when it comes to AI. However, the investments are dwarfed by investments made by U.S.-based cloud service providers elsewhere, particularly in the U.S. As AI becomes increasingly economically and militarily important, governments are taking steps to ensure they have control over the technology that they depend on.
- Europe > Germany (0.84)
- Europe > United Kingdom (0.28)
- North America > United States (0.27)
- (2 more...)
- Government (1.00)
- Information Technology > Services (0.93)
How Co-Regulation Became the Parenting Buzzword of the Day
On a recent evening, my children and I were watching "The Iron Giant," the animated cult classic about a robot from outer space who, in 1957, crash-lands in the woods outside a small town in Maine, befriends a young boy, and wages battle against both a murderously stupid G-man and his own robo-programming as a sentient weapon of war. The boy, named Hogarth, and his mother, Annie, get by on her income as a diner waitress, and, late one night, she comes home from a draining double shift to find her son missing. Frantic with worry, Annie drives around until she locates Hogarth at the edge of the woods--on his own and perfectly fine--where he manically chatters at her about the big metal alien he claims to have spotted nearby. Then she catches herself and, with effort, takes on a low, steadier voice. "I'm not in the mood," she says.
- North America > United States > Maine (0.25)
- North America > United States > Pennsylvania > Westmoreland County (0.05)
AI robots capable of carrying out attack on NHS that would cause COVID-like disruption, expert warns
Kara Frederick, tech director at the Heritage Foundation, discusses the need for regulations on artificial intelligence as lawmakers and tech titans discuss the potential risks. Robots run by artificial intelligence have the potential to attack the United Kingdom's National Health Service (NHS) and cause a disruption on the scale of the COVID-19 pandemic, according to a cybersecurity expert. Ian Hogarth, who works on the U.K.'s AI task force that was formed to help protect against the risks of AI, says that the growing technology is capable of an attack that could cripple the country's NHS or even carry out a "biological attack," according to a report in the Daily Star. Hogarth noted that AI technology continues to improve at a rapid pace, something he warned would lower barriers to "perpetrating some kind of cyber attack or cyber crime." A photo shows a sign of the London Ambulance Service of NHS in London.
- Europe > United Kingdom (1.00)
- Asia > China (0.08)
Malicious use of AI could cause 'unimaginable' damage, says UN boss
Malicious use of artificial intelligence systems could cause a "horrific" amount of death and destruction, the UN secretary general has said, calling for a new UN body to tackle the threats posed by the technology. António Guterres said harmful use of AI for terrorist, criminal or state purposes could also cause "deep psychological damage", and he said AI-enabled cyber-attacks were already targeting UN peacekeeping and humanitarian operations. "The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale," Guterres said. Speaking at the first UN security council session on AI, he said the advent of generative AI – the term for AI tools such as ChatGPT that produce convincing text, image and voice from human prompts – could be a defining moment for disinformation and hate speech and add a "new dimension" to the manipulation of human behaviour. Guterres called for the creation of a new UN entity along the lines of the Intergovernmental Panel on Climate Change to tackle the risks.
- Government > Military (0.74)
- Government > Regional Government > Europe Government > United Kingdom Government (0.34)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.81)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.58)
AI watch: from Wimbledon to job losses in journalism
Artificial intelligence is either going to save humanity or finish it off, depending on who you speak to. Either way, every week there are new developments and breakthroughs. The Wimbledon tennis tournament revealed it will be introducing AI-generated audio and text commentary in its online highlights this year. The All England Club has teamed up with the tech group IBM to provide automatically created voiceovers and captions for its footage. The move, which is separate to the BBC's coverage of the tournament, follows use of the cloned voice of a British athletics commentator, Hannah England, for online coverage of the European Athletics Championships.
- Europe > United Kingdom > England > Greater London > London > Wimbledon (0.61)
- North America > United States (0.05)
- Europe > Germany (0.05)
- Media (1.00)
- Leisure & Entertainment > Sports > Tennis (1.00)
From pope's jacket to napalm recipes: how worrying is AI's rapid growth?
When the boss of Google admits to losing sleep over the negative potential of artificial intelligence, perhaps it is time to get worried. Sundar Pichai told the CBS programme 60 Minutes this month that AI could be "very harmful" if deployed wrongly, and was developing fast. "So does that keep me up at night?" Google has launched Bard, a chatbot to rival the ChatGPT phenomenon, and its parent, Alphabet, owns the world-leading DeepMind, a UK-based AI company. He is not the only AI insider to voice concerns. Last week, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was "not taking AI safety seriously enough". Musk told Fox News that Page wanted "digital superintelligence, basically a digital god, if you will, as soon as possible". So how much of a danger is posed by unrestrained AI development? Musk is one of thousands of signatories to a letter published by the Future of Life Institute, a thinktank, that called for a six-month moratorium on the creation of "giant" AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft's Bing search engine. The risks cited by the letter include "loss of control of our civilization". The approach to product development shown by AI practitioners and the tech industry would not be tolerated in any other field, said Valérie Pisano, another signatory to the letter. Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – says work is carried out to make sure that these systems are not racist or violent, in a process known as alignment (ie, making sure they "align" with human values), but then they are released into the public realm. "The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that."
- North America > Canada > Quebec (0.25)
- Europe > United Kingdom (0.25)
- North America > United States > California > Alameda County > Berkeley (0.05)
Big AI investor warns what's on the horizon will affect every human being
A big investor in artificial intelligence has issued a warning about what's on the horizon in terms of AI development and how it will affect everyone. A new op-ed published in the Financial Times details a conversation between AI investor Ian Hogarth and a machine learning researcher who told Hogarth that engineers are already close to developing Artificial General Intelligence (AGI). The machine learning researcher said we are nearing AGI "from now onwards" — for those who don't know, AGI can be simply understood as an AI that is capable of performing, or learning how to perform, every task that a human can do. Despite these warnings, Hogarth admits that such views aren't widely held and that estimates for when humans will achieve AGI vary from a decade to more than half a century. Recounting the conversation, Hogarth asked: if what the researcher was saying is true, and humans could be on the verge of creating something truly dangerous, doesn't he have a responsibility to warn people?
Should We Create An 'Island' For God-Like Artificial Intelligence?
Welcome to Fantasy Island, the future location for artificial intelligence research if some AI skeptics get their way. In a recent Financial Times essay, artificial intelligence investor Ian Hogarth made the case for slowing down what he calls "God-like AI." In his words, this is "a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it." Hogarth argues for taking a cautious approach to the development of such an artificial general intelligence (AGI) and also advocates for strict regulations. Specifically, he calls for creating a regulatory regime modeled after the FDA's pre-market approval process for drugs, though this would only apply to "narrowly useful AI systems."
Big tech hasn't monopolized A.I. software, but Nvidia dominates A.I. hardware
I recently caught up with Ian Hogarth and Nathan Benaich, who each year produce The State of AI Report, a must-read snapshot of how commercial applications of A.I. are evolving. Benaich is the founder of Air Street Capital, a solo venture capital fund that is one of the savviest early-stage investors in A.I.-based startups I know. Hogarth is the former co-founder of concert discovery app Songkick and has since gone on to become a prominent angel investor as well as one of the founders behind the founder-led European venture capital platform Plural. There's always a lot to digest in their report. But one of the key takeaways from this year's State of AI is that concerns that established tech giants and their affiliated A.I. research labs would monopolize the development of A.I. have proven, if not exactly wrong, then at least premature. While it is true that Alphabet (which has both Google Brain and DeepMind in its stable), Meta, Microsoft, and OpenAI (which is closely partnered now with Microsoft) are building large "foundational models" for natural language processing and image and video generation, they are hardly the only players in the game.
- Oceania > Australia (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (2 more...)
- Banking & Finance > Capital Markets (0.70)
- Banking & Finance > Trading (0.49)
- Media > News (0.47)
- (2 more...)