The models are updated using a CNN, which ensures robustness to noise, scaling, and minor variations in the targets' appearance. As with many related approaches, an online implementation offloads most of the processing to an external server, leaving the embedded device in the vehicle to carry out only lightweight, frequently needed tasks. Since quick reactions are crucial for proper and safe vehicle operation, a rapid response from the underlying software is essential, which is why the online approach is popular in this field. Also in the context of ensuring robustness and stability, some authors apply fusion techniques to information extracted from CNN layers: as previously mentioned, important correlations can be drawn between deep and shallow layers, which can be exploited together to identify robust features in the data.
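The deep/shallow fusion idea above can be illustrated with a minimal sketch. This is not the method of any specific cited work: the feature maps here are random stand-ins for what a real CNN's early (high-resolution, low-channel) and late (low-resolution, high-channel) layers would produce, and the fusion is a simple pool-and-concatenate, one of the simplest ways to combine the two.

```python
import numpy as np

def fuse_layer_features(shallow, deep):
    """Fuse a shallow and a deep CNN feature map into one descriptor.

    shallow: (H, W, C1) map with fine spatial detail, few channels.
    deep:    (h, w, C2) map with coarse spatial detail, many channels.
    Each map is global-average-pooled over its spatial dimensions,
    then the two channel vectors are concatenated.
    """
    s = shallow.mean(axis=(0, 1))   # (C1,)
    d = deep.mean(axis=(0, 1))      # (C2,)
    return np.concatenate([s, d])   # (C1 + C2,)

# Hypothetical feature maps standing in for real CNN activations.
shallow_map = np.random.rand(64, 64, 16)
deep_map = np.random.rand(8, 8, 128)
descriptor = fuse_layer_features(shallow_map, deep_map)
print(descriptor.shape)  # (144,)
```

In practice the pooled vectors are usually normalized (and often weighted) before concatenation, since deep and shallow activations can differ greatly in scale; the sketch omits that to keep the fusion step itself visible.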
Image recognition technology is usually associated with an array of security and surveillance-related uses and the rapidly developing autonomous-vehicle niche, but can image recognition apps help businesses in other verticals? With Reuters' predictions for the not-so-far-off year of 2022 being in the region of a hefty $43-57 billion, image recognition is one big lure for AI outfits and, simultaneously, a lot of hope for the businesses and organizations that depend upon it for their survival and success. These include entities as diverse as manufacturers of autonomous cars and security systems, national nature parks, border security forces, and companies that produce drones. Be it monitoring the state of a much-cherished rainforest or sending drones to remote oil rigs to check if all one's assets are in one piece, almost all of the widely known uses of image recognition seem to be related to security and surveillance.
The image recognition technology used in today's autonomous cars and aerial drones, as well as tomorrow's cancer-seeking robotic medical devices, depends on artificial intelligence. These "computers that see" teach themselves to recognize objects -- a dog, a pedestrian crossing the street, a stopped car, or a cancerous tumor. Now, researchers at Stanford University have devised a new type of camera system that can classify images faster and more energy-efficiently, and that could one day be built small enough to be embedded in the devices themselves -- something that is not possible today. "That autonomous car you just passed has a relatively huge, relatively slow, energy-intensive computer in its trunk," says Gordon Wetzstein, an assistant professor of electrical engineering and (by courtesy) computer science at Stanford, who directed the research. Wetzstein and Julie Chang, a doctoral candidate in his lab and first author on the paper, have married two types of computers into one -- creating a hybrid optical-electrical computer designed specifically for image analysis.
In the last five years, edge computing has attracted tremendous attention from industry and academia due to its promise to reduce latency, save bandwidth, improve availability, and protect data privacy. At the same time, we have witnessed the proliferation of AI algorithms and models, which have accelerated the successful deployment of intelligence, mainly in cloud services. These two trends, combined, have created a new horizon: Edge Intelligence (EI). The development of EI requires much attention from both the computer-systems research community and the AI community to meet these demands. However, existing computing techniques used in the cloud are not directly applicable to edge computing, owing to the diversity of computing resources and the distribution of data sources. We envision that a framework that can be rapidly deployed on the edge and enable edge AI capabilities is still missing. To address this challenge, in this paper we first present the definition and a systematic review of EI. Then, we introduce an Open Framework for Edge Intelligence (OpenEI), a lightweight software platform that equips edges with intelligent processing and data-sharing capability. We analyze four fundamental EI techniques used to build OpenEI and identify several open problems and potential research directions. Finally, four typical application scenarios enabled by OpenEI are presented.
Dozens of companies that fancy their technologies as a great fit for the automotive market are on the hunt for alliances. For example, Kyocera and FotoNation announced Tuesday (May 17) a partnership agreement to develop intelligent automotive camera technology -- deemed critical in the coming era of semi- and fully autonomous cars. Kyocera has already dabbled in the auto sector with its rearview camera modules. FotoNation, which holds the lion's share of computational imaging solutions for mobile phones, entered the driver monitoring system market last year. "The two companies' interests are aligned," Sumat Mehra, senior vice president of marketing and business development at FotoNation, told EE Times, "to expand each company's presence in the automotive market" -- well beyond what they offer today.