New Capability to Look Up an ASL Sign from a Video Example
Neidle, Carol, Opoku, Augustine, Ballard, Carey, Zhou, Yang, He, Xiaoxiao, Dimitriadis, Gregory, Metaxas, Dimitris
Looking up an unknown sign in an ASL dictionary can be difficult. Most ASL dictionaries are organized based on English glosses, despite the fact that (1) there is no convention for assigning English-based glosses to ASL signs; and (2) there is no 1-1 correspondence between ASL signs and English words. Furthermore, what if the user does not know either the meaning of the target sign or its possible English translation(s)? Some ASL dictionaries enable searching through specification of articulatory properties, such as handshapes, locations, movement properties, etc. However, this is a cumbersome process and does not always result in successful lookup. Here we describe a new system, publicly shared on the Web, to enable lookup of a video of an ASL sign (e.g., a webcam recording or a clip from a continuous signing video). The user submits a video for analysis and is presented with the five most likely sign matches, in decreasing order of likelihood, so that the user can confirm the selection and then be taken to our ASLLRP Sign Bank entry for that sign. Furthermore, this video lookup is also integrated into our newest version of SignStream(R) software to facilitate linguistic annotation of ASL video data, enabling the user to directly look up a sign in the video being annotated, and, upon confirmation of the match, to directly enter into the annotation the gloss and features of that sign, greatly increasing the efficiency and consistency of linguistic annotations of ASL video data.
- North America > United States > New York > Monroe County > Rochester (0.14)
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
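The five-best-matches workflow described in the abstract can be sketched as a simple top-k retrieval over sign embeddings. This is a minimal illustration only: the function name, the embedding representation, and the cosine-similarity ranking are assumptions for the sketch, not the recognition model actually used by the system.

```python
import numpy as np

def top_k_signs(query_emb, sign_bank, k=5):
    """Rank sign-bank entries by cosine similarity to a query embedding.

    query_emb: 1-D feature vector extracted from the user's sign video.
    sign_bank: dict mapping gloss -> 1-D feature vector.
    Returns the k most likely glosses in decreasing order of similarity,
    mirroring the "five most likely sign matches" presented to the user.
    """
    q = query_emb / np.linalg.norm(query_emb)
    scores = {
        gloss: float(np.dot(q, v / np.linalg.norm(v)))
        for gloss, v in sign_bank.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example with random embeddings for made-up glosses.
rng = np.random.default_rng(0)
bank = {g: rng.normal(size=64) for g in ["BOOK", "HOUSE", "CAT", "DOG", "TREE", "CAR"]}
query = bank["CAT"] + 0.1 * rng.normal(size=64)  # noisy copy of "CAT"
print(top_k_signs(query, bank))  # "CAT" should rank first
```

In the deployed system the user confirms one of the returned candidates before being taken to the ASLLRP Sign Bank entry; the ranked list here plays the role of that candidate set.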
Computer Vision Estimation of Emotion Reaction Intensity in the Wild
Qian, Yang, Kargarandehkordi, Ali, Mutlu, Onur Cezmi, Surabhi, Saimourya, Honarmand, Mohammadmahdi, Wall, Dennis Paul, Washington, Peter
Emotions play an essential role in human communication. Developing computer vision models for automatic recognition of emotion expression can aid in a variety of domains, including robotics, digital behavioral healthcare, and media analytics. Three types of emotional representation are traditionally modeled in affective computing research: Action Units, Valence Arousal (VA), and Categorical Emotions. As part of an effort to move beyond these representations towards more fine-grained labels, we describe our submission to the newly introduced Emotional Reaction Intensity (ERI) Estimation challenge in the 5th competition for Affective Behavior Analysis in-the-Wild (ABAW). We developed four deep neural networks trained in the visual domain and a multimodal model trained with both visual and audio features to predict emotion reaction intensity. Our best-performing model on the Hume-Reaction dataset achieved an average Pearson correlation coefficient of 0.4080 on the test set using a pre-trained ResNet50 model. This work provides a first step towards the development of production-grade models which predict emotion reaction intensities rather than discrete emotion categories.
- North America > United States > Hawaii (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
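The evaluation metric reported in the abstract, an average Pearson correlation coefficient across emotion dimensions, can be sketched as follows. The function name and array layout are assumptions for illustration; the challenge's official scoring code is not reproduced here.

```python
import numpy as np

def average_pearson(preds, targets):
    """Mean Pearson correlation across emotion dimensions.

    preds, targets: arrays of shape (n_samples, n_emotions) holding
    predicted and ground-truth reaction intensities. The Pearson r is
    computed per emotion column, then averaged.
    """
    preds = np.asarray(preds, dtype=float)
    targets = np.asarray(targets, dtype=float)
    rs = []
    for j in range(preds.shape[1]):
        r = np.corrcoef(preds[:, j], targets[:, j])[0, 1]
        rs.append(r)
    return float(np.mean(rs))

# Toy check: a perfect positive linear relation yields an average near 1.0.
t = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
print(average_pearson(2 * t + 0.3, t))  # should be ~1.0
```

Because Pearson correlation is invariant to positive linear rescaling, a model can score well on this metric while being miscalibrated in absolute intensity, which is one reason it is typically reported alongside the raw predictions.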
Conti puts its chips on AI start-up
Hanover, Germany – Continental has acquired a minority stake in Recogni, a German-US start-up working on a new chip architecture for AI-based object recognition in real time. The San Jose, California-based tech firm's chips are intended for use in Continental's vehicle computers, for example to perform rapid processing of sensor data for automated and autonomous driving. As an investor – the percentage stake was not disclosed – the Hanover-based group is contributing financial support and expertise in the field of AI, vehicle sensors and advanced driver assistance systems to Recogni's chip design work. Continental said volume production featuring the new chip application could begin as early as 2026, with the new processors serving as "ultra-economical data boosters" with minimal energy consumption. The development, it added, will enable vehicle computers to gain a rapid sense of the vehicle's immediate surroundings, thus creating the basis for automated and autonomous driving.
- North America > United States > California > Santa Clara County > San Jose (0.27)
- Europe > Germany > Lower Saxony > Hanover (0.27)
- Automobiles & Trucks (1.00)
- Information Technology (0.68)
This AI vision startup, Recogni, has its sights set on enabling fully autonomous vehicles
With a mission to design a novel vision-oriented artificial intelligence platform, this Silicon Valley-based startup, Recogni, offers what it describes as the only solution on the market with the performance and efficiency necessary to enable fully autonomous vehicles (AVs). The company is headquartered in San Jose, California, with operations in Munich, Germany. It is backed by leading venture capital firms, including GreatPoint Ventures, Toyota AI Ventures, BMW i Ventures, Faurecia, Fluxunit – OSRAM Ventures, and DNS Capital. Recogni's system could deliver unprecedented inference performance through novel edge processing, allowing vehicles to see farther and make driving decisions faster than humans while consuming minimal amounts of energy. The auto industry is experiencing an evolution toward vehicle autonomy. Current solutions are based on repurposed legacy technology, which constrains their performance and efficiency and renders them ineffective at enabling full vehicle autonomy.
- North America > United States > California > Santa Clara County > San Jose (0.28)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.28)
Autonomous Vehicles and the Consumer
Currently, society faces numerous transportation inefficiencies. Moreover, the average motor vehicle accident can cost over a million dollars, depending on its severity. Through the subsequent increase in insurance premiums and potential legal fees, consumers unmistakably bear a significant burden. Finally, people spend, on average, several thousand dollars on fuel annually. Clearly, there must be a solution to optimize mobility for everyone.
Autonomous Vehicles and Last Mile Delivery
Last mile delivery (LMD) is a critical segment of the logistics industry. The LMD market was valued at over 30 billion dollars in 2018 and is expected to grow significantly through 2025, when it is projected to be worth over 60 billion dollars. However, LMD faces severe inefficiencies, costing firms a significant amount of revenue annually. This is a sponsored post written by Recogni. The opinions expressed in this article are the sponsor's own.
- Transportation > Freight & Logistics Services (0.57)
- Information Technology (0.41)
AI Startup Funded By BMW And Toyota Says Robotic Taxis Feasible In 2024
It's not just state and federal regulators' safety concerns that are preventing automotive manufacturers from producing self-driving cars for the masses. First they have to figure out how to make them more energy efficient. Opening the cargo area of a typical self-driving test vehicle usually reveals a trunk full of computers and wires needed to process petabytes of sensor data in real time. That doesn't leave a lot of room for luggage, groceries, or anything else you usually transport in a car, not to mention the huge amounts of energy these systems suck from the batteries as they process all this information. That's why BMW i Ventures and Toyota AI Ventures have invested in Recogni, a San Jose, Calif.-based startup that is developing an artificial intelligence platform optimized for autonomous vehicles that can process information quickly while consuming very little energy.
- Automobiles & Trucks > Manufacturer (1.00)
- Transportation > Ground > Road (0.40)