After years of inaction in the U.S. Congress, E.U. tech laws have had wide-ranging implications for Silicon Valley companies. Europe's digital privacy law, the General Data Protection Regulation, has prompted some companies, such as Microsoft, to overhaul how they handle users' data even beyond Europe's borders. Meta, Google and other companies have faced fines under the law, and Google had to delay the launch of its generative AI chatbot Bard in the region due to a review under the law. However, there are concerns that the law created costly compliance measures that have hampered small businesses, and that lengthy investigations and relatively small fines have blunted its efficacy among the world's largest companies.
People familiar with the talks, who spoke on the condition of anonymity to describe delicate negotiations, said France appeared to be the strongest obstacle to a deal, based in part on its desire to protect Paris-based Mistral, a burgeoning developer of AI foundation models, along with other French AI firms. A bid to limit AI in police work, meanwhile, comes as France is set to deploy AI-powered smart cameras for policing and security at the 2024 Summer Olympics and as French cities have already entered legal gray areas by deploying or testing such technology.
Two leaders of Meta's Fundamental AI Research team, Yann LeCun and Joelle Pineau, also hold positions at New York University and McGill University, respectively. Geoffrey Hinton, often called the "godfather of AI," taught at the University of Toronto while serving as Google's top AI expert. Hinton said that he worked for Google only half-time for 10 years and that his university appointment "was mainly advising graduate students on theses they had already started." LeCun and Pineau did not respond to requests for comment.
Facebook owner Meta has been an AI player for years, hiring some of the field's smartest researchers and using the tech to help decide which of its users should see certain advertisements. In July, it doubled down on a very different approach to AI than its Big Tech rivals. It announced that Llama 2, its GPT-4 competitor, would be "open source" -- available for anyone to download, modify and add to their own products for free. The approach won Meta plaudits from tech start-ups that were worried Google, Microsoft and OpenAI would try to corner the market for advanced AI and squeeze out any competitors. But it has also been criticized for making it easier for people to use AI for malicious purposes.
Camera filters that artificially age you have been around for years -- Snapchat and FaceApp both had popular versions in 2019. But advancements in AI imaging are making the results more realistic, perhaps by using machine learning models trained on young and old images of real faces. Board-certified dermatologist Aleksandra Brown said the TikTok time travel filter is the most accurate she has seen in predicting how a given face would age, down to details like skin texture and muscle positions.
Rosário said the chatbot processed a 250-character command and took some 15 seconds to employ its algorithmic magic and spit out a policy -- a process that would normally take him about three days. The result, he said, showcased how artificial intelligence can be a useful tool for optimizing and improving public service. Yet Brazil's first ChatGPT-crafted law has launched the South American nation into a debate ringing across the globe: As artificial intelligence takes the world by storm, is society headed toward a future in which automation replaces humans?
LinkedIn began rolling out a generative AI feature to select users this spring, powered by OpenAI's GPT-4 model, to help premium subscribers write headlines and "about" sections. Users can generate text summarizing what's already in their profile and get spruced-up suggestions from the feature, which is highlighted with a gold button that says "write with AI." The capability has since been made available to all of LinkedIn's millions of premium subscribers, and the company said it's exploring expanding access in the future.
Altman was fired from OpenAI on Nov. 17, kicking off a chaotic stretch as the tech industry grappled with the implications of the face of the AI revolution being unceremoniously removed from his company. Five days later, Altman was back and a new board had been appointed, consisting of Taylor, former Treasury secretary Larry Summers and Quora CEO Adam D'Angelo, one of the previous board members who had removed Altman. Since then, Silicon Valley has speculated about who else would join the board and ultimately control the fate of the company.