"We are so back," posted OpenAI CEO Sam Altman. In other words, ChatGPT can now browse the internet again. A week after the ChatGPT creator launched a Browse with Bing plugin, the feature was deactivated after it was discovered that users could access paywalled content. Now, internet browsing is back, meaning users can get real-time information past the September 2021 cutoff of data that was used to train the model. OpenAI posted the announcement on X (formerly known as Twitter).
We've already seen OpenAI and Salesforce incorporate their standalone chatbots into larger, more comprehensive machine learning platforms that span the breadth and depth of their businesses. On Tuesday, Google announced that its Bard AI is receiving the same treatment and has been empowered to pull real-time data from other Google applications including Docs, Maps, Lens, Flights, Hotels and YouTube, as well as the users' own silo of stored personal data, to provide more relevant and actionable chatbot responses. "I've had the great fortune of being a part of the team from the inception," Jack Krawczyk, product lead for Bard, told Engadget. "This Thursday marks six months since Bard entered into the world." But despite the technology's rapid spread, Krawczyk concedes that many users remain wary of it, either because they don't see an immediate use case for it in their personal lives or because, as he puts it, "some others are saying, 'I've also heard that it makes things up a lot.'"
Winnow CEO Marc Zornes and Iberostar Group's Dr. Megan Morikawa discuss how artificial intelligence can target food waste in commercial kitchens -- and improve both business efficiency and global sustainability. Food waste makes up an estimated 30% to 40% of the food supply, according to the U.S. Department of Agriculture -- and now a London company is using artificial intelligence in an attempt to address the problem. Winnow, a food waste solution company, has developed an AI-powered system that aims to reduce food waste in commercial kitchens worldwide. CEO Marc Zornes said the company's tech can measure the foods that get tossed daily using machine learning and a camera. "We use computer vision to identify what's being wasted in real time, literally as the food's being thrown away," he told Fox News Digital in an interview.
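The pipeline Zornes describes pairs a camera-based classifier with a scale under the bin, then rolls the detections up into daily totals. The sketch below is purely illustrative (not Winnow's actual software): it assumes upstream computer vision has already produced labeled `(item, weight_kg)` events and shows only the aggregation step.

```python
from collections import defaultdict

def summarize_waste(events):
    """Aggregate per-item waste totals from (item, weight_kg) detection events.

    Each event is assumed to come from a camera/scale pairing: a vision
    model labels the discarded food while a scale records its weight.
    """
    totals = defaultdict(float)
    for item, weight_kg in events:
        totals[item] += weight_kg
    return dict(totals)

# Hypothetical day of detections
events = [("bread", 1.2), ("lettuce", 0.4), ("bread", 0.8)]
print(summarize_waste(events))
```

A report like this, produced daily, is what lets a kitchen see which ingredients dominate its waste stream.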
Our group has been developing and operating a platform to acquire, archive, and manage various data related to the Earth's environment and make it available to researchers across a wide range of fields. Development of this system began in the 1980s to receive, archive, and distribute Asian satellite image data. Currently, the system covers a variety of data including weather, climate change, disaster prevention, biodiversity, health, and agriculture. Today, the Data Integration and Analysis System (DIAS) is a large-scale analysis platform with huge storage and more than 10,000 registered users (half of them in Japan and the other half primarily elsewhere in Asia). As shown in the accompanying figure, users can access data gathered through the common collection API via the common use API, and can operate services at the application layer.
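The layered design described above — a collection API that ingests data into shared storage and a use API that serves it to applications — can be sketched as follows. This is a deliberately simplified illustration; the class and method names are invented for this example and are not the real DIAS interfaces.

```python
# Hypothetical sketch of a two-layer data platform: providers push
# records in through a "collection" interface, applications read them
# back through a "use" interface built on the same shared archive.

class DataPlatform:
    def __init__(self):
        self._archive = {}  # dataset name -> list of records

    # "common collection API": providers push observations into the archive
    def collect(self, dataset, record):
        self._archive.setdefault(dataset, []).append(record)

    # "common use API": applications query archived data by dataset
    def use(self, dataset):
        return list(self._archive.get(dataset, []))

platform = DataPlatform()
platform.collect("weather", {"station": "Tokyo", "temp_c": 21.5})
print(platform.use("weather"))
```

Separating the two interfaces is what lets application-layer services evolve independently of how each dataset is ingested.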
Real-time feature computation, which calculates features from raw data on demand, is a crucial component in the machine learning (ML) application process. These real-time features are vital for various real-world ML applications, such as anti-fraud management, risk control, and personalized recommendations. In these cases, low latency (milliseconds) in computing fresh data features is crucial for accurate and high-quality online inference. As illustrated in the accompanying figure, a data scientist typically begins an ML application by developing feature computation scripts (for example, in Python or SparkSQL) for offline training. However, these scripts cannot meet the demands of online serving, including low latency, high throughput, and high availability. Hence, these scripts must be transformed into performance-optimized code (for example, in C), typically developed by an engineering team with systems and production expertise.
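The offline/online gap can be made concrete with a small example. The rolling average below is an invented feature for illustration: the offline version recomputes over the full window each time (fine for batch training), while the online version maintains running state so each new event costs O(1) — the kind of rewrite (in practice in C or C++ rather than Python) that an engineering team would perform for serving.

```python
from collections import deque

# Offline-style script: recompute the feature from scratch on each call.
def avg_amount_offline(history):
    return sum(history) / len(history) if history else 0.0

# Online rewrite: keep an incremental running total over a bounded
# window, so updating the feature on a new event is constant-time.
class RollingAvg:
    def __init__(self, window=100):
        self.buf = deque(maxlen=window)
        self.total = 0.0

    def update(self, amount):
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]  # evict oldest value's contribution
        self.buf.append(amount)
        self.total += amount
        return self.total / len(self.buf)
```

Both versions compute the same feature; only the latency profile differs, which is exactly why the transformation step exists.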
Apple has finally thrown its hat into the VR ring with the announcement of the Apple Vision Pro. This VR headset is designed not only to deliver more immersive entertainment experiences but also to boost productivity with virtual desktop apps as well as the newly announced digital Persona. With the front camera on the Vision Pro headset, users will be able to scan their face to create an almost 1:1 virtual reconstruction -- aka Persona -- of their likeness. Not only will this avatar be more visually accurate, but it will also be animated in real time to match the wearer's mouth and hand movements for more natural-looking conversations. The digital Persona can be integrated with the FaceTime app for visionOS to make collaboration with non-VR-using teammates smoother.
When the epic open-world PlayStation 4 game Red Dead Redemption 2 began development in 2013, it took 2,200 days to record the 1,200 voices in the game with 700 voice actors, who recited the 500,000 lines of dialogue. It was a massive feat that is nearly impossible for any other studio to replicate -- let alone a games studio smaller than Rockstar Games. But with advances in artificial intelligence it is becoming easier and easier to recreate human voices, enabling automated real-time responses, near-limitless dialogue options and speech tailored to a user's unique input. The technology also raises questions about the ethics of synthesising voices. The Australian software developer Replica Studios rolled out a voice synthesiser platform for games developers in 2019 -- a tool used by Australian games developer PlaySide Studios in their game Age of Darkness: Final Stand.
African conservationists are hoping that artificial intelligence (AI) powered cameras could aid in the protection of endangered species, such as the forest and savanna elephants. "We must urgently put an end to poaching and ensure that sufficient suitable habitat for both forest and savanna elephants is conserved," Dr. Bruno Oberle, Director-General of the International Union for the Conservation of Nature (IUCN), said when discussing the potential new technology. The cameras, developed in collaboration between Dutch tech start-up Hack the Planet and British scientists at Stirling University, will be able to detect different animal species and humans in real time and provide live alerts to local villages and rangers, Stirling wrote in a press release. A pilot test of the tech, which works with satellites and a range of networks including Wi-Fi, long-range radio and cellular coverage, immediately labeled images and sent out warnings calling for help.
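The alerting step such a camera system implies can be sketched in a few lines. This is not Hack the Planet's actual software; the class names, confidence threshold, and callback interface are all invented for illustration: a detector labels each frame, and only detections of watched classes above a confidence cutoff trigger a notification to rangers.

```python
# Illustrative triage logic for an AI camera trap: forward only
# high-confidence detections of watched classes (e.g. elephants, or
# humans who may be poachers) to a notification channel.

ALERT_CLASSES = {"human", "forest_elephant", "savanna_elephant"}

def triage(detections, notify, threshold=0.8):
    """detections: list of (label, confidence); notify: alert callback."""
    alerts = []
    for label, conf in detections:
        if label in ALERT_CLASSES and conf >= threshold:
            alerts.append(label)
            notify(f"ALERT: {label} detected (confidence {conf:.2f})")
    return alerts

sent = []
triage([("human", 0.93), ("duiker", 0.88)], sent.append)
print(sent)  # only the human detection produces an alert
```

Filtering on-device like this matters in the field, where satellite and long-range radio links are too constrained to stream every frame.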
Mind-reading technology can now transcribe people's thoughts in real time based on the blood flow in their brain. A study put three people in MRI machines and had them listen to stories. For the first time, researchers claim, they produced a rolling text of people's thoughts, and not just single words or sentences, without using a brain implant. The mind-reading technology did not exactly replicate the stories, but captured the main points. The breakthrough raises concerns about 'mental privacy', as it could be the first step toward being able to eavesdrop on others' thoughts.
Unlike satellite images, which are typically acquired and processed in near-real-time, global land cover products have historically been produced on an annual basis, often with substantial lag times between image processing and dataset release. We developed a new automated approach for globally consistent, high resolution, near real-time (NRT) land use land cover (LULC) classification leveraging deep learning on 10 m Sentinel-2 imagery. We utilize a highly scalable cloud-based system to apply this approach and provide an open, continuous feed of LULC predictions in parallel with Sentinel-2 acquisitions. This first-of-its-kind NRT product, which we collectively refer to as Dynamic World, accommodates a variety of user needs ranging from extremely up-to-date LULC data to custom global composites representing user-specified date ranges. Furthermore, the continuous nature of the product’s outputs enables refinement, extension, and even redefinition of the LULC classification. In combination, these unique attributes enable unprecedented flexibility for a diverse community of users across a variety of disciplines.
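One thing a continuous per-acquisition feed enables, as noted above, is custom composites over user-specified date ranges. The sketch below is an illustrative toy, not the Dynamic World production pipeline: for each pixel it takes the modal (most frequent) class across the acquisitions in a chosen range, using tiny 2x2 "scenes" and invented class labels.

```python
from collections import Counter

def mode_composite(scenes):
    """Per-pixel modal class over a stack of classified acquisitions.

    scenes: list of 2D grids of class labels, one grid per acquisition
    date within the user-specified range.
    """
    rows, cols = len(scenes[0]), len(scenes[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            labels = [scene[r][c] for scene in scenes]
            out[r][c] = Counter(labels).most_common(1)[0][0]
    return out

# Three acquisitions of the same 2x2 tile
scenes = [
    [["trees", "water"], ["crops", "crops"]],
    [["trees", "water"], ["built", "crops"]],
    [["trees", "trees"], ["crops", "crops"]],
]
print(mode_composite(scenes))  # [['trees', 'water'], ['crops', 'crops']]
```

Because the underlying feed is continuous, the same compositing logic can be rerun over any date range — which is the flexibility the paragraph above refers to.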