AI-Alerts
OpenAI Looks for Its iPhone Moment With Custom GPT Chatbot Apps - CNET
OpenAI, the company whose ChatGPT brought AI chatbots to mainstream awareness, said Monday that it'll let you build special-purpose AI apps using its technology. And with a new app store coming that'll let you find or share these GPTs, as the company is calling these customized artificial intelligence tools, OpenAI looks like it's hoping to have something of an iPhone moment. You don't need to know how to program to make a new GPT. You just give it plain-language instructions, upload some of your own knowledge in the form of PDFs, videos or other files, then steer the bot's purpose in a direction like creating images or searching the web. "GPTs are tailored versions of ChatGPT for a specific purpose," OpenAI Chief Executive Sam Altman said at the OpenAI DevDay conference in San Francisco.
UK AI summit: G7 countries agree AI code of conduct
This week, UK prime minister Rishi Sunak is hosting a group of 100 representatives from the worlds of business and politics to discuss the potential and pitfalls of artificial intelligence. The AI Safety Summit, held at Bletchley Park, UK, begins on 1 November and aims to come up with a set of global principles with which to develop and deploy "frontier AI models", the terminology favoured by Sunak and key figures in the AI industry for powerful models that don't yet exist but may be built very soon. While the Bletchley Park event is the focal point, a wider week of fringe events is being held across the UK, alongside a raft of UK government announcements on AI. Here are the latest developments. Governments and companies around the world have decided that the week of the UK summit is a ripe time to announce their own AI developments.
Major UK retailers urged to quit 'authoritarian' police facial recognition strategy
Some of Britain's biggest retailers, including Tesco, John Lewis and Sainsbury's, have been urged to pull out of a new policing strategy amid warnings it risks wrongly criminalising people of colour, women and LGBTQ people. A coalition of 14 human rights groups has written to the main retailers, also including Marks & Spencer, the Co-op, Next, Boots and Primark, saying that their participation in a new government-backed scheme that relies heavily on facial recognition technology to combat shoplifting will "amplify existing inequalities in the criminal justice system". The letter, from Liberty, Amnesty International and Big Brother Watch, among others, questions the unchecked rollout of a technology that has provoked fierce criticism over its impact on privacy and human rights at a time when the European Union is seeking to ban the technology in public spaces through proposed legislation. "Facial recognition technology notoriously misidentifies people of colour, women and LGBTQ people, meaning that already marginalised groups are more likely to be subject to an invasive stop by police, or at increased risk of physical surveillance, monitoring and harassment by workers in your stores," the letter states. Its authors also express dismay that the move will "reverse steps" that big retailers introduced during the Black Lives Matter movement, including high-profile commitments to be champions of diversity, equality and inclusion. Meanwhile, concerns over the broadening use of facial recognition technology have further intensified after the emergence of details of a police watchlist used to justify the contentious decision to use biometric surveillance at July's Formula One British Grand Prix at Silverstone.
Rishi Sunak says AI has threats and risks - but outlines its potential
Prof Carissa Veliz, associate professor in philosophy at the Institute of Ethics in AI, University of Oxford, said that, unlike the EU, the UK had so far been "notoriously averse to regulating AI, so it is interesting for Sunak to say that the UK is particularly well-suited to lead the efforts of ensuring the safety of AI".
ChatGPT wrote code that can make databases leak sensitive information
A vulnerability in OpenAI's ChatGPT, now fixed, could have been used by malicious actors. Researchers manipulated ChatGPT and five other commercial AI tools to create malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services in a first-of-its-kind demonstration. The work has already led the companies responsible for some of the AI tools, including Baidu and OpenAI, to implement changes to prevent malicious users from taking advantage of the vulnerabilities. "It's the very first study to demonstrate that vulnerabilities of large language models in general can be exploited as an attack path to online commercial applications," says Xutan Peng, who co-led the study while at the University of Sheffield in the UK. Peng and his colleagues looked at six AI services that can translate human questions into the SQL programming language, which is commonly used to query computer databases. "Text-to-SQL" systems that rely on AI have become increasingly popular; even standalone AI chatbots, such as OpenAI's ChatGPT, can generate SQL code that can be plugged into such databases.
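The attack path is easiest to see in code. Below is a minimal Python sketch, not the researchers' actual exploit: generate_sql is a hypothetical stand-in for a text-to-SQL model call, hard-coded here to show the kind of query a manipulated prompt could elicit, and the in-memory SQLite database and table names are invented for illustration. The point is that a system which executes model-generated SQL verbatim inherits whatever the model can be talked into writing.

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Hypothetical stand-in for an LLM text-to-SQL call. A crafted question
    # can steer a real model into emitting SQL that reads or destroys data
    # the user should never touch; we hard-code one such output here.
    return "SELECT username, password FROM users;"

def answer_question(conn: sqlite3.Connection, question: str) -> list:
    sql = generate_sql(question)
    # The core flaw: model output is trusted and executed with full
    # database privileges, with no allow-listing or sandboxing.
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")
print(answer_question(conn, "Which products are on sale?"))
# Prints [('alice', 'hunter2')]: sensitive rows leaked via generated SQL.
```

The fixes the vendors shipped are not described in detail in the article, but the sketch makes clear why mitigations generally involve constraining or validating generated queries rather than trusting them outright.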
California hits pause on GM Cruise self-driving cars due to safety concerns
The US state of California has suspended testing of Cruise self-driving cars developed by General Motors (GM), citing safety concerns after a series of accidents and mishaps. California's Department of Motor Vehicles (DMV) announced on Tuesday that it had suspended the deployment of GM self-driving vehicles and driverless testing permits, the latest regulatory agency to express concerns over their safety. "When there is an unreasonable risk to public safety, the DMV can immediately suspend or revoke permits," the department said in response to an inquiry from the news outlet AFP. Self-driving cars have been met with mixed reactions from the public, some of whom see them as an exciting technological development while others view them as a nuisance or a hazard. The suspension follows a series of accidents involving Cruise vehicles and marks a serious setback for GM's efforts to break into the autonomous vehicle industry.
Cloud Growth Powers Microsoft Above Expectations
Microsoft on Tuesday reported strong sales in its latest quarter, showing that its corporate customers have been shaking off jitters about spending heavily in the uncertain economy. The results also showed early signs that the company's investments in generative artificial intelligence were beginning to bolster sales, most notably reversing what had been slowing growth of the company's important cloud computing product. The company had $56.5 billion in sales in the three months that ended in September, up 13 percent from a year earlier. Profit hit $22.3 billion, up 27 percent. The results beat analyst expectations and Microsoft's own estimates.
Machine Learning Sensors
The last decade has seen a surge in commercial applications using machine learning (ML). Similarly, marked improvements in latency and bandwidth of wireless communication have led to the rapid adoption of cloud-connected devices, which gained the moniker Internet of Things (IoT). With such technology, it became possible to add intelligence to sensor systems and devices, enabling new technologies such as Amazon Echo, Google Nest, and other so-called "smart devices." However, these devices offer only the illusion of intelligence and are merely vessels for submitting and receiving queries from a centralized cloud infrastructure. This cloud processing leads to concerns about where user data is being stored, what other services it might be used for, and who has access to it [7]. More recently, efforts have progressed in dovetailing the domains of IoT and machine learning to embed intelligence directly on the device, known as tiny machine learning (TinyML) [10].
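To make the cloud-versus-device distinction concrete, here is a minimal Python sketch of the TinyML pattern using the tflite_runtime package; the model file name sensor_model.tflite and the wake-word framing are assumptions for illustration, not details from the article.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a (hypothetical) quantized on-device model; no cloud round trip.
interpreter = tflite.Interpreter(model_path="sensor_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The raw sensor reading stays on the device. Only the model's small
# output (e.g., a "wake word detected" score) would ever be transmitted.
reading = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result)
```

Because inference runs locally, the privacy concerns the paragraph raises about centralized cloud processing are narrowed to whatever small inference result the device chooses to share.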
Legal Challenges to Generative AI, Part II
DALL-E, Midjourney, and Stable Diffusion are among the generative AI technologies widely used to produce images in response to user prompts. The output images are, for the most part, indistinguishable from images humans might have created. Generative AI systems are capable of producing human-creator-like images because of the extremely large quantities of images, paired with textual descriptions of the images' contents, on which the systems' image models were trained. A text prompt to compose a picture of a dog playing with a ball on a beach at sunset will generate a responsive image drawing upon embedded representations of how dogs, balls, beaches, and sunsets are typically depicted and arranged in images of this sort.