Sam Altman, the CEO of artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation of the technology "to mitigate" its risks. A stranger in a coffee shop can watch you and learn virtually everything about you, where you've been and even predict your movements "with greater ease and precision than ever before," experts say. All the stranger would need is a photo and advanced artificial intelligence technology that already exists, said Kevin Baragona, a founder of DeepAI.org. "There are services online that can use a photo of you, and I can find everything. Every instance of your face on the internet, every place you've been and use that for stalker-type purposes," Baragona told Fox News Digital.
With the hype around AI reaching a fever pitch in recent months, many people fear programs like ChatGPT will one day put them out of a job. For one New York lawyer, that nightmare could become a reality sooner than expected, but not for the reasons you might think. As reported by The New York Times, attorney Steven Schwartz of the law firm Levidow, Levidow and Oberman recently turned to OpenAI's chatbot for assistance with writing a legal brief, with predictably disastrous results. Schwartz used ChatGPT to do "legal research," cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge. Schwartz's firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City.
The waters in Venice's main canal have turned fluorescent green in the area near Italy's renowned Rialto Bridge, as authorities seek to determine the cause. Italy's fire department posted a video on Sunday as one of its boats sailed on phosphorescent waters. "The Grand Canal coloured in green is what the fire department found this morning as we intervened together with ARPAV to collect samples and analyse this abnormal colour," it said. ARPAV, Veneto's regional environmental protection agency, said it received samples of the altered waters and was working to identify the substance that changed their colour. The Venice prefect has called an emergency meeting of police forces to understand what happened and study possible countermeasures, the ANSA news agency reported.
Lawyer Steven Schwartz of Levidow, Levidow & Oberman has been practicing law for three decades. Now, one case could completely derail his entire career. He relied on ChatGPT in his legal filings, and the AI chatbot manufactured out of thin air the previous cases that Schwartz cited. It all starts with the case in question, Mata v. Avianca. According to the New York Times, an Avianca customer named Roberto Mata was suing the airline after a serving cart injured his knee during a flight.
Cheered by the news that OpenAI, the company behind ChatGPT, had released a free iPhone app for the language model, I went to the Apple app store to download it, only to find that it was nowhere to be found. This is because – as I belatedly discovered – it's currently only available via the US app store and will be rolled out to other jurisdictions in due course. Despite that, though, the UK store was positively groaning with "ChatGPT" apps – of which I counted 25 before losing the will to live. For example, there's AI Chat – Chatbot AI Assistant ("Experience the power of AI! Create Essays, Emails, Resumes or Any Text!"). Or Chat AI – Ask Open Chatbot ("The ultimate AI chat app that can assist you with anything and everything you need").
Google has just been hit with a $32.5 million penalty for infringing on a patent held by Sonos. According to Law360, a California federal jury ordered the fine after determining that Google infringed on a patent Sonos holds relating to grouping speakers so they can play audio at the same time, something the company has been doing for years. US District Judge William Alsup had already determined that early versions of products like the Chromecast Audio and Google Home infringed on Sonos' patent; the question was whether more recent, revamped products were also infringing on it. The jury found in favor of Sonos, but decided a second patent -- one that relates to controlling devices via a smartphone or other device -- wasn't violated. It said that Sonos hadn't convincingly shown that the Google Home app infringed on that particular patent.
Elizabeth Holmes convinced investors and patients that she had a prototype of a microsampling machine that could run a wide range of relatively accurate tests using a fraction of the volume of blood usually required. She lied; the Edison and miniLab devices didn't work. Worse still, the company was aware they didn't work, but continued to give patients inaccurate information about their health, including telling healthy pregnant women that they were having miscarriages and producing false positives on cancer and HIV screenings. But Holmes, who has to report to prison by May 30, was convicted of defrauding investors; she wasn't convicted of defrauding patients. This is because the principles of ethics for disclosure to investors, and the legal mechanisms used to take action against fraudsters like Holmes, are well developed.
James Phillips is a weirdo and a misfit. At least, he was one of those who responded to a request by Dominic Cummings, Boris Johnson's former chief of staff, for exactly such people to work in No 10. Phillips worked as a technology adviser in Downing Street for two and a half years, during which time he became increasingly concerned that ministers were not paying enough attention to the risks posed by the fast-moving world of artificial intelligence. "We are still not talking enough about how dangerous these things could be," says Phillips, who left government last year when Johnson quit. "The level of concern in government has not yet reached the level of concern that exists in private within the industry."
As the artificial intelligence frenzy builds, a sudden consensus has formed. While there's a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane. Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI's most influential avatar of the moment, OpenAI CEO Sam Altman. "I think if this technology goes wrong, it can go quite wrong," he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. "We want to work with the government to prevent that from happening."
Texas residents shared how familiar they are with artificial intelligence on a scale from one to 10 and detailed how much they use it each day. The "Godfather of A.I.," Geoffrey Hinton, quit Google out of fear that his former employer intends to deploy artificial intelligence in ways that will harm human beings. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton recently told The New York Times. But stomping out the door does nothing to atone for his own actions, and it certainly does nothing to protect conservatives – who are the primary target of A.I. programmers – from being canceled. Here are five things to know as the battle over A.I. turns hot: Elon Musk recently revealed that Google co-founder Larry Page and other Silicon Valley leaders want AI to establish a "digital god" that "would understand everything in the world."