Oakley Meta Vanguard Review: The Do-It-All Smart Glasses

WIRED

Meta's second collaboration with Oakley looks good, sounds great, and might just replace your action cam. All products featured on WIRED are independently selected by our editors. However, when you buy something through our retail links, we may earn an affiliate commission. I can hear music when I'm biking! They're workout headphones, sunglasses, and an action camera all in one.




Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning

Scivetti, Wesley, Aoyama, Tatsuya, Wilcox, Ethan, Schneider, Nathan

arXiv.org Artificial Intelligence

Humans have a remarkable ability to acquire and understand grammatical phenomena that are seen rarely, if ever, during childhood. Recent evidence suggests that language models with human-scale pretraining data may possess a similar ability by generalizing from frequent to rare constructions. However, it remains an open question how widespread this generalization ability is, and to what extent this knowledge extends to meanings of rare constructions, as opposed to just their forms. We fill this gap by testing human-scale transformer language models on their knowledge of both the form and meaning of the (rare and quirky) English LET-ALONE construction. To evaluate our LMs, we construct a bespoke synthetic benchmark that targets syntactic and semantic properties of the construction. We find that human-scale LMs are sensitive to form, even when related constructions are filtered from the dataset. However, human-scale LMs do not make correct generalizations about LET-ALONE's meaning. These results point to an asymmetry in current architectures' sample efficiency between language form and meaning that is not present in human language learners.
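As a rough illustration of the minimal-pair paradigm such synthetic benchmarks typically use (an assumption here, not the paper's actual benchmark), a model can be credited with knowledge of LET-ALONE's form if it assigns higher probability to the grammatical member of each pair. The stub scores below stand in for a real LM's sentence log-probabilities:

```python
# Hedged sketch of minimal-pair evaluation; function names, example pairs,
# and the stub scores are illustrative assumptions, not the paper's benchmark.

def prefers_grammatical(log_prob, grammatical, ungrammatical):
    """True if the model scores the grammatical sentence higher."""
    return log_prob(grammatical) > log_prob(ungrammatical)

def minimal_pair_accuracy(log_prob, pairs):
    """Fraction of (grammatical, ungrammatical) pairs the model gets right."""
    hits = sum(prefers_grammatical(log_prob, g, u) for g, u in pairs)
    return hits / len(pairs)

# Stand-in scores; a real evaluation would sum token log-probabilities
# from a pretrained LM over each sentence.
toy_scores = {
    "He can't run a mile, let alone a marathon.": -20.0,
    "He can't run a mile, let alone of a marathon.": -25.0,
    "She won't eat broccoli, let alone kale.": -18.0,
    "She will eat broccoli, let alone kale.": -17.0,
}
```

With these stub scores, the toy "model" handles the word-order contrast but misses the negative-polarity licensing requirement of let-alone, scoring 1 out of 2 pairs.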


The biggest dating app photo turn-offs (and no, it's not holding a fish)

Daily Mail - Science & tech

Choosing what pictures to include in your online dating profile is a big deal. Most people want to present a decent mix of flattering, fun and relaxed photos that showcase the best of you. But there are some in particular that should be avoided at all costs, experts say. A team from dating app Wisp asked 1,200 people for their biggest photo red flags that make them swipe left. The survey revealed 83 per cent of singles judge profiles on photos before reading a single word of your personal bio.


Are you In or Out (of gallery)? Wisdom from the Same-Identity Crowd

Bhatta, Aman, Dhakal, Maria, King, Michael C., Bowyer, Kevin W.

arXiv.org Artificial Intelligence

A central problem in one-to-many facial identification is that the person in the probe image may or may not have enrolled image(s) in the gallery; that is, may be In-gallery or Out-of-gallery. Past approaches to detect when a rank-one result is Out-of-gallery have mostly focused on finding a suitable threshold on the similarity score. We take a new approach, using the additional enrolled images of the identity with the rank-one result to predict if the rank-one result is In-gallery / Out-of-gallery. Given a gallery of identities and images, we generate In-gallery and Out-of-gallery training data by extracting the ranks of additional enrolled images corresponding to the rank-one identity. We then train a classifier to utilize this feature vector to predict whether a rank-one result is In-gallery or Out-of-gallery. Using two different datasets and four different matchers, we present experimental results showing that our approach is viable for mugshot quality probe images, and also, importantly, for probes degraded by blur, reduced resolution, atmospheric turbulence and sunglasses. We also analyze results across demographic groups, and show that In-gallery / Out-of-gallery classification accuracy is similar across demographics. Our approach has the potential to provide an objective estimate of whether a one-to-many facial identification is Out-of-gallery, and thereby to reduce false positive identifications, wrongful arrests, and wasted investigative time. Interestingly, comparing the results of older deep CNN-based face matchers with newer ones suggests that the effectiveness of our Out-of-gallery detection approach emerges only with matchers trained using advanced margin-based loss functions.
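The rank-based feature the abstract describes can be sketched as follows; the helper names and the simple thresholded decision rule are illustrative assumptions, standing in for the paper's trained classifier:

```python
# Hypothetical sketch of the In-/Out-of-gallery idea: if the rank-one
# identity's *other* enrolled images also sit near the top of the ranked
# list, the probe is probably In-gallery. The toy rule below is an
# assumption; the paper trains a classifier on these rank features.

def rank_feature(probe_scores, gallery_ids):
    """Given similarity scores of one probe against every gallery image
    (probe_scores[i] goes with gallery_ids[i]), return the rank-one
    identity and the ranks of that identity's other enrolled images."""
    order = sorted(range(len(probe_scores)),
                   key=lambda i: probe_scores[i], reverse=True)
    top_id = gallery_ids[order[0]]
    extra_ranks = [r for r, i in enumerate(order)
                   if gallery_ids[i] == top_id and r != 0]
    return top_id, extra_ranks

def predict_in_gallery(extra_ranks, k=3):
    # Toy decision rule: In-gallery if at least one additional enrolled
    # image of the rank-one identity appears in the top-k results.
    return bool(extra_ranks) and min(extra_ranks) < k
```

For a gallery with three enrolled images of identity "a", a probe whose top three matches are all "a" images would be called In-gallery, while a probe matching "a" only at rank one (with the other "a" images buried low in the list) would be called Out-of-gallery.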


Identifying Physically Realizable Triggers for Backdoored Face Recognition Networks

Raj, Ankita, Pal, Ambar, Arora, Chetan

arXiv.org Artificial Intelligence

Backdoor attacks embed a hidden functionality into deep neural networks, causing the network to display anomalous behavior when activated by a predetermined pattern (the trigger) in the input, while behaving normally otherwise on public test data. Recent works have shown that backdoored face recognition (FR) systems can respond to natural-looking triggers like a particular pair of sunglasses. Such attacks pose a serious threat to the applicability of FR systems in high-security applications. We propose a novel technique to (1) detect whether an FR network is compromised with a natural, physically realizable trigger, and (2) identify such triggers given a compromised network. We demonstrate the effectiveness of our methods with a compromised FR network, where we are able to identify the trigger (e.g., green sunglasses or red hat) with a top-5 accuracy of 74%, whereas a naive brute force baseline achieves 56% accuracy.
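The naive brute-force baseline mentioned above can be pictured as follows; the stub network, trigger names, and scoring rule are placeholders of my own, not the paper's method:

```python
# Sketch of a brute-force trigger search: paste each candidate physical
# trigger onto probe images and rank candidates by how often they flip
# the network's prediction to the attacker's target identity.

def rank_triggers(predict, apply_trigger, images, candidates, target_id):
    """Return candidates sorted from most to least suspicious, plus the
    per-candidate flip rate."""
    flip_rate = {
        trig: sum(predict(apply_trigger(img, trig)) == target_id
                  for img in images) / len(images)
        for trig in candidates
    }
    ranked = sorted(candidates, key=lambda t: flip_rate[t], reverse=True)
    return ranked, flip_rate

# Stub "backdoored network": always answers the target identity whenever
# green sunglasses are present in the input (a toy stand-in for a real FR model).
def stub_predict(inp):
    img, trig = inp
    return "target" if trig == "green sunglasses" else "someone else"

def stub_apply_trigger(img, trig):
    return (img, trig)
```

In this toy setup the green sunglasses trigger flips every probe to the target identity, so it ranks first; a real search would be far noisier, which is why the abstract reports only 56% accuracy for this baseline.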


Replace your sunglasses with Ray-Ban Meta smart glasses during this rare Amazon deal

Popular Science

If you've been wanting a pair of Ray-Ban Meta smart glasses, this is the best price I've seen since last year's Black Friday. Amazon has pairs as low as $239 right now, both with and without tinted lenses. They offer the classic Wayfarer style, so they look good on just about everyone. The sale is limited to what's in stock right now, so grab the color and the size you want before they sell out. When talking about the Ray-Ban Meta glasses, most people focus on the built-in camera.


Everything Announced at Meta Connect 2024: Quest 3S, Orion AR glasses and Meta AI updates

Engadget

Although Meta Connect 2024 lacked a marquee high-end product for the holiday season, it still included a new budget VR headset and a tease of the "magic glasses" Meta's XR gurus have been talking about for the better part of a decade. In addition, the company keeps plowing forward with new AI tools for its Ray-Ban glasses and social platforms. Here's everything the company announced at Meta Connect 2024. Today's best mixed reality gear -- like Apple's Vision Pro and the Meta Quest 3 -- are headsets with passthrough video capabilities. But the tech industry eventually wants to squeeze that tech into something resembling a pair of prescription glasses.


Digital Avatars: Framework Development and Their Evaluation

Rupprecht, Timothy, Chang, Sung-En, Wu, Yushu, Lu, Lei, Nan, Enfu, Li, Chih-hsiang, Lai, Caiyue, Li, Zhimin, Hu, Zhijun, He, Yumei, Kaeli, David, Wang, Yanzhi

arXiv.org Artificial Intelligence

We present a novel prompting strategy for artificial intelligence (AI)-driven digital avatars. To better quantify how our prompting strategy affects anthropomorphic features like humor, authenticity, and favorability, we present Crowd Vote, an adaptation of Crowd Score that allows judges to elect a large language model (LLM) candidate over competitors answering the same or similar prompts. To visualize the responses of our LLM and the effectiveness of our prompting strategy, we propose an end-to-end framework for creating high-fidelity AI-driven digital avatars. This pipeline effectively captures an individual's essence for interaction, and our streaming algorithm delivers a high-quality digital avatar with real-time audio-video streaming from server to mobile device. Both our visualization tool and our Crowd Vote metrics demonstrate that our AI-driven digital avatars achieve state-of-the-art humor, authenticity, and favorability, outperforming all competitors and baselines. In the case of our Donald Trump and Joe Biden avatars, their authenticity and favorability are rated higher than even their real-world equivalents.
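The Crowd Vote election can be pictured as a simple plurality tally over judges' ballots; this is a guessed simplification of the metric described above, not the authors' actual scoring procedure:

```python
# Minimal sketch, assuming Crowd Vote reduces to a plurality election in
# which each judge's ballot names one preferred avatar/LLM candidate.
from collections import Counter

def crowd_vote(ballots):
    """Return the winning candidate and its vote share."""
    tally = Counter(ballots)
    winner, votes = tally.most_common(1)[0]
    return winner, votes / len(ballots)
```

For ballots `["avatar_a", "avatar_a", "avatar_b"]`, the winner is `avatar_a` with a two-thirds share; the real metric presumably aggregates many such elections across prompts and anthropomorphic dimensions.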