goggle
Antigravity A1 drone review: FPV flying unlike anything else
The Antigravity A1 is what happens when Insta360's 360-degree cameras are given wings and flying feels like a video game. Spinning out as its own brand, Antigravity's debut drone is a big swing: a three-piece set with a drone that captures 8K 360-degree video, FPV goggles and a motion controller. Challenging the dominance of DJI's (many!) consumer drones is a big ask. Antigravity's approach is to play to its strengths in 360-degree video and smartphone-first editing. Unfortunately, there are some usability bumps.
- Leisure & Entertainment > Games > Computer Games (0.35)
- Transportation > Air (0.35)
- Information Technology > Robotics & Automation (0.35)
Antigravity A1 Review: A 360-Degree Drone
The world's first 360-degree drone is fun all around, if you don't mind the steep price or wearing goggles to control it. As someone who has been reviewing camera drones for over a decade, I rarely encounter one that feels genuinely new. While DJI's continual stream of steadily improving, ever-reliable drones almost always impresses, what Antigravity has done with its first-ever product, the A1, essentially invents an entirely novel subcategory: the 360 drone. Using the same shoot-first, frame-later technology as the Insta360 X5 (Antigravity is technically a distinct company from Insta360, but the brands have close ties), the A1 has twin cameras to capture everything around it, allowing the user to reframe the footage later using mobile or desktop apps. Each of the cameras uses a 1/1.28-inch sensor and an ultrawide lens to capture a hemispherical view.
- North America > United States > California (0.05)
- Europe > United Kingdom (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
A Deep Learning Approach to Detect Complete Safety Equipment For Construction Workers Based On YOLOv7
Islam, Md. Shariful, Shaqib, SM, Ramit, Shahriar Sultan, Khushbu, Shahrun Akter, Sattar, Mr. Abdus, Noori, Dr. Sheak Rashed Haider
In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The recommended approach uses the YOLOv7 (You Only Look Once) object detection algorithm to precisely locate these safety items. The dataset utilized in this work consists of labeled images split into training, testing, and validation sets. Each image has bounding box labels that indicate where the safety equipment is located within the image. The model is trained to identify and categorize the safety equipment based on the labeled dataset through an iterative training approach. We used a custom dataset to train this model. Our trained model performed admirably, with good precision, recall, and F1-score for safety equipment recognition. The model's evaluation also produced encouraging results, with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research contributes to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry.
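The mAP@0.5 figure reported above hinges on intersection-over-union: a predicted box counts as a true positive only when it overlaps a ground-truth box of the same class with IoU of at least 0.5. A minimal sketch of that overlap check (illustrative only, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# At the mAP@0.5 threshold, this prediction would count as a hit.
pred = (10, 10, 50, 50)
truth = (12, 12, 48, 52)
print(iou(pred, truth) >= 0.5)  # True
```

Precision and recall are then computed over these matched/unmatched boxes at each confidence threshold, and mAP averages the resulting precision-recall curves across classes.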
How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data
Stoian, Mihaela Cătălina, Dyrmishi, Salijona, Cordy, Maxime, Lukasiewicz, Thomas, Giunchiglia, Eleonora
Deep Generative Models (DGMs) have been shown to be powerful tools for generating tabular data, as they have been increasingly able to capture the complex distributions that characterize them. However, to generate realistic synthetic data, it is often not enough to have a good approximation of their distribution, as it also requires compliance with constraints that encode essential background knowledge on the problem at hand. In this paper, we address this limitation and show how DGMs for tabular data can be transformed into Constrained Deep Generative Models (C-DGMs), whose generated samples are guaranteed to be compliant with the given constraints. This is achieved by automatically parsing the constraints and transforming them into a Constraint Layer (CL) seamlessly integrated with the DGM. Our extensive experimental analysis with various DGMs and tasks reveals that standard DGMs often violate constraints, some exceeding $95\%$ non-compliance, while their corresponding C-DGMs are never non-compliant. Then, we quantitatively demonstrate that, at training time, C-DGMs are able to exploit the background knowledge expressed by the constraints to outperform their standard counterparts with up to $6.5\%$ improvement in utility and detection. Further, we show how our CL does not necessarily need to be integrated at training time, as it can also be used as a guardrail at inference time, still producing some improvements in the overall performance of the models. Finally, we show that our CL does not hinder the sample generation time of the models.
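The constraint-layer idea can be pictured, in its simplest form, as a projection appended after the generator so that every sample lands in the feasible region. The sketch below assumes only per-feature interval constraints; the paper's CL is parsed from richer logical constraints and integrated with the DGM, and the `constraint_layer` name here is hypothetical:

```python
def constraint_layer(sample, bounds):
    """Hypothetical guardrail: clamp each generated feature into its
    feasible [lo, hi] interval, so the output always satisfies the
    (interval) constraints regardless of what the generator produced."""
    return [min(max(x, lo), hi) for x, (lo, hi) in zip(sample, bounds)]

bounds = [(0.0, 1.0), (0.0, 120.0)]   # e.g. a probability in [0, 1], an age in [0, 120]
raw = [1.3, -4.0]                      # raw generator output violating both constraints
print(constraint_layer(raw, bounds))   # [1.0, 0.0]
```

Applied at inference time only, such a projection acts as the guardrail the abstract mentions; integrating it during training additionally lets the generator's gradients account for the constraints.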
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Austria > Vienna (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
The Apple Vision Pro Is Spectacular and Sad
I was alone in the wilderness, in the goggles wrapped around my head. Maybe the White Sands thunderhead was too foreboding. A storm above a desert: that was the last thing I needed while I felt like I was drowning. So I switched myself into the Mount Hood environment, with its lakeside vista, lined in evergreens, and its gentle chirp of birds.
Transparent TVs, AI catflaps: what were the tech standouts at CES 2024?
The next year in technology is to be dominated by upgrades for everything from catflaps to binoculars to cars, devices that disappear in your home including transparent televisions, plus a new era of spatial computing brought in by some very expensive goggles. Those are the predictions from the annual CES tech show in Las Vegas that drew to a close this week. Unlike previous years, the event was not dominated by the big technology and car firms but rather a record-breaking 1,400 startups displaying their prototypes in hopes of catching the eyes of consumers and investors alike. Despite myriad promises to the contrary, many of these novel gadgets may never make it to the shops. But all of them show how technology is progressing and give a glimpse of what's next.
- North America > United States > Nevada > Clark County > Las Vegas (0.25)
- Asia > South Korea (0.06)
- Transportation > Passenger (0.72)
- Automobiles & Trucks (0.50)
- Transportation > Ground > Road (0.49)
- Media (0.48)
AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
Roberts, Nicholas, Li, Xintong, Huang, Tzu-Heng, Adila, Dyah, Schoenberg, Spencer, Liu, Cheng-Yu, Pick, Lauren, Ma, Haotian, Albarghouthi, Aws, Sala, Frederic
Weak supervision (WS) is a powerful method to build labeled datasets for training supervised models in the face of little-to-no labeled data. It replaces hand-labeling data with aggregating multiple noisy-but-cheap label estimates expressed by labeling functions (LFs). While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features. To address this, a handful of methods have proposed automating the LF design process using a small set of ground truth labels. In this work, we introduce AutoWS-Bench-101: a framework for evaluating automated WS (AutoWS) techniques in challenging WS settings -- a set of diverse application domains on which it has been previously difficult or impossible to apply traditional WS techniques. While AutoWS is a promising direction toward expanding the application scope of WS, the emergence of powerful methods such as zero-shot foundation models reveals the need to understand how AutoWS techniques compare or cooperate with modern zero-shot or few-shot learners. This informs the central question of AutoWS-Bench-101: given an initial set of 100 labels for each task, we ask whether a practitioner should use an AutoWS method to generate additional labels or use some simpler baseline, such as zero-shot predictions from a foundation model or supervised learning. We observe that in many settings, it is necessary for AutoWS methods to incorporate signal from foundation models if they are to outperform simple few-shot baselines, and AutoWS-Bench-101 promotes future research in this direction. We conclude with a thorough ablation study of AutoWS methods.
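The "aggregating multiple noisy-but-cheap label estimates" step above is commonly baselined with a majority vote over labeling-function outputs. The sketch below is a generic illustration of that baseline, not AutoWS-Bench-101 code; the `ABSTAIN` convention and function names are assumptions:

```python
from collections import Counter

ABSTAIN = -1  # conventional marker for a labeling function that declines to vote

def majority_vote(lf_votes):
    """Aggregate noisy labeling-function votes for one example by
    majority vote, ignoring abstentions; returns ABSTAIN on a tie
    or when every LF abstained."""
    votes = [v for v in lf_votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    (top, top_n), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == top_n:
        return ABSTAIN  # unresolved tie between classes
    return top

# Three hypothetical LFs voting on four unlabeled examples.
vote_matrix = [
    [1, 1, 0],          # clear majority -> 1
    [ABSTAIN, 0, 0],    # abstention ignored -> 0
    [1, 0, ABSTAIN],    # tie -> ABSTAIN
    [ABSTAIN] * 3,      # all abstain -> ABSTAIN
]
labels = [majority_vote(row) for row in vote_matrix]
print(labels)  # [1, 0, -1, -1]
```

More sophisticated aggregators model per-LF accuracies and correlations instead of counting votes equally, which is part of what an AutoWS pipeline can learn from the initial 100 labels.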
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Wisconsin (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- (2 more...)
The Age of Goggles Has Arrived
"Vision Pro feels familiar, yet it's entirely new." That's how Apple's CEO, Tim Cook, introduced the company's new computer goggles at the Apple Worldwide Developers Conference on Monday. The Vision Pro headset, which resembles a glass scuba mask with a fabric head strap, seamlessly blends the real and digital worlds, Cook said. But the product's name, which could just as easily describe a brand of contact-lens solution, hints at a challenge. Familiar yet entirely new, natural but augmented: If goggles really are the future of computing, they will have to overcome a bevy of conflicting sentiments. As you might expect, Apple's product is slick.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Middle East > Syria > Aleppo Governorate > Aleppo (0.04)
- Information Technology (0.69)
- Leisure & Entertainment (0.47)
Apple Vision Pro Hands On: The Opposite of Disappearing
Apple's long-awaited mixed-reality headset, the Vision Pro, is here. Or not yet here, but announced. In a crescendoed moment of its software conference keynote this morning, Apple executives revealed a pair of smart goggles that portend a post-iPhone world. I had a hands-on (heads-on?) demo of the Vision Pro headset earlier today, in a building constructed on Apple's campus specifically to house meetings around this new product. Apple executives declined to go on the record during the demo and subsequent briefing, but it was clear that Apple views Vision Pro as a spatial computing platform, not a singular device.
- Information Technology > Communications > Mobile (0.37)
- Information Technology > Artificial Intelligence > Vision (0.33)
New Training Data Labeling System for Machine Learning Helps Developers
Machine learning (ML) has become one of the most prominent forms of data analysis for everything from fraud detection to visual quality control. Yet the analytic results can often suffer from insufficiently labeled training data. A team of Georgia Tech researchers has created a system that allows users to more effectively label a training dataset with higher accuracy than current methods. "We are looking at the problem from a data management perspective," said School of Computer Science (SCS) Assistant Professor Xu Chu. "In contrast to a lot of ML research that tries to tackle the lack of sufficient training data from an ML algorithm design perspective, we aim at building a system that helps users effectively label a dataset."