Vision


Neural Style Transfer

#artificialintelligence

Leon Gatys et al. introduced the Neural Style Transfer technique in 2015 in "A Neural Algorithm of Artistic Style". In short, Neural Style Transfer is a technique for composing one image in the style of another. Neural Style Transfer (NST) refers to a class of software algorithms that manipulate digital images or videos so that they adopt the appearance or visual style of another image; these algorithms are characterized by their use of deep neural networks for image transformation. If you want to go deeper into the original technique, you can refer to the paper.
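
To make the idea concrete, here is a minimal sketch of the two losses at the heart of the original technique: a content loss that matches CNN features of the content image, and a style loss that matches Gram matrices of the style image. The layer names, loss weights, and feature-extraction setup below are illustrative assumptions (the paper uses a pretrained VGG network), not the authors' exact configuration.

```python
# Minimal sketch of the two Gatys-style losses. Layer names ("conv1"...)
# and weights are illustrative assumptions, not the paper's exact setup;
# features would come from a pretrained CNN such as VGG.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (C, H, W) feature map from one CNN layer
    c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)  # normalized channel-correlation matrix

def nst_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # Content loss: match raw features of the content image at one layer
    content_loss = F.mse_loss(gen_feats["conv4"], content_feats["conv4"])
    # Style loss: match Gram matrices of the style image at several layers
    style_loss = sum(
        F.mse_loss(gram_matrix(gen_feats[l]), gram_matrix(style_feats[l]))
        for l in ("conv1", "conv2", "conv3", "conv4")
    )
    return alpha * content_loss + beta * style_loss
```

In the full algorithm, the pixels of the generated image are optimized by gradient descent on this combined loss while the network's weights stay frozen.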


AI Fire Detection: Computer Vision Guards the Forest

#artificialintelligence

In the age of global warming, forest fires are becoming more frequent and faster-growing. Clearly, the world needs sustainable solutions to preserve our natural resources, protect human lives, and avoid economic devastation. As an environmental advocate and sustainability enthusiast, I got to thinking about whether a technological solution could help with this daunting task. Fortunately, I am also a computer scientist, one who is all too aware of how tedious and time-consuming research can be. In such times, I often choose to play my ace in the hole by going straight to Intel's rich ecosystem--the Intel Partner Alliance. Not surprisingly, it led me to an ingenious solution: the AAEON Intelligent Forest Fire Monitoring System (Figure 1).


Automate annotation of image training data with Amazon Rekognition

#artificialintelligence

Every machine learning (ML) model demands data to train it. If your model isn't predicting Titanic survival or iris species, then acquiring a dataset might be one of the most time-consuming parts of your model-building process--second only to data cleaning. What data cleaning looks like varies from dataset to dataset. For example, the following is a set of images tagged "robin" that you might want to use to train an image recognition model on bird species. That nest might count as dirty data, and for some model applications it may be inappropriate to include American and European robins in the same category, but this seems pretty good so far.
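
As a hedged sketch of what such automation might look like, the snippet below sends a local image to Amazon Rekognition's DetectLabels API via boto3 and keeps the returned labels as annotation candidates. The file name, region, and thresholds are placeholder assumptions, and AWS credentials must already be configured in your environment.

```python
# Sketch: auto-tag a local image with Amazon Rekognition's DetectLabels API.
# Region, file path, and thresholds are placeholders; requires configured
# AWS credentials.
import json
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def label_image(path, max_labels=10, min_confidence=80.0):
    # Send the raw image bytes to Rekognition
    with open(path, "rb") as f:
        response = rekognition.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=max_labels,
            MinConfidence=min_confidence,
        )
    # Keep just the label names and confidences as annotation candidates
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

if __name__ == "__main__":
    print(json.dumps(label_image("robin_001.jpg"), indent=2))  # hypothetical file
```

A human can then review only the low-confidence labels instead of tagging every image from scratch.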


Retinal waves prime visual motion detection by simulating future optic flow

Science

As a mouse runs forward across the forest floor, the scenery that it passes flows backwards. Ge et al. show that the developing mouse retina practices in advance for what the eyes must later process as the mouse moves. Spontaneous waves of retinal activity flow in the same pattern as would be produced days later by actual movement through the environment. This patterned, spontaneous activity refines the responsiveness of cells in the brain's superior colliculus, which receives neural signals from the retina to process directional information.

Science, abd0830, this issue p. eabd0830 (doi: 10.1126/science.abd0830)

INTRODUCTION

Fundamental circuit features of the mouse visual system emerge before the onset of vision, allowing the mouse to perceive objects and detect visual motion immediately upon eye opening. How the mouse visual system achieves self-organization by the time of eye opening, without structured external sensory input, is not well understood. In the absence of sensory drive, the developing retina generates spontaneous activity in the form of propagating waves. Past work has shown that spontaneous retinal waves provide the correlated activity necessary to refine the development of gross topographic maps in downstream visual areas, such as retinotopy and eye-specific segregation, but it is unclear whether waves also convey information that instructs the development of higher-order visual response properties, such as direction selectivity, at eye opening.

RATIONALE

Spontaneous retinal waves exhibit stereotyped, changing spatiotemporal patterns throughout development. To characterize the spatiotemporal properties of waves during development, we used one-photon wide-field calcium imaging of retinal axons projecting to the superior colliculus in awake neonatal mice. We identified a consistent propagation bias that occurred during a transient developmental window shortly before eye opening. Using quantitative analysis, we investigated whether the directionally biased retinal waves conveyed ethological information relevant to future visual inputs. To understand the origin of directional retinal waves, we used pharmacological, optogenetic, and genetic strategies to identify the retinal circuitry underlying the propagation bias. Finally, to evaluate the role of directional retinal waves in visual system development, we used pharmacological and genetic strategies to chronically manipulate wave directionality and used two-photon calcium imaging to measure responses to visual motion in the midbrain superior colliculus immediately after eye opening.

RESULTS

We found that spontaneous retinal waves in mice exhibit a distinct propagation bias in the temporal-to-nasal direction during a transient window of development (postnatal day 8 to day 11). The spatial geometry of directional wave flow aligns strongly with the optic flow pattern generated by forward self-motion, a dominant natural optic flow pattern after eye opening. We identified an intrinsic asymmetry in the retinal circuit that enforced the wave propagation bias, involving the same circuit elements necessary for motion detection in the adult retina, specifically asymmetric inhibition from starburst amacrine cells through γ-aminobutyric acid type A (GABAA) receptors. Finally, manipulation of directional retinal waves, through either the chronic delivery of gabazine to block GABAergic inhibition or the starburst amacrine cell–specific mutation of the FRMD7 gene, impaired the development of responses to visual motion in superior colliculus neurons downstream of the retina.

CONCLUSION

Our results show that spontaneous activity in the developing retina prior to vision onset is structured to convey essential information for the development of visual response properties before the onset of visual experience. Spontaneous retinal waves simulate future optic flow patterns produced by forward motion through space, owing to an asymmetric retinal circuit that has an evolutionarily conserved link with motion detection circuitry in the mature retina. Furthermore, the ethologically relevant information relayed by directional retinal waves enhances the development of higher-order visual function in the downstream visual system prior to eye opening. These findings provide insight into the activity-dependent mechanisms that regulate the self-organization of brain circuits before sensory experience begins.

[Figure: Origin and function of directional retinal waves. (A) Imaging of retinal axon activity reveals a propagation bias in spontaneous retinal waves (scale bar, 500 μm). (B) Cartoon depiction of wave flow vectors projected onto visual space. Vectors (black arrows) align with the optic flow pattern (red arrows) generated by forward self-motion. (C) Asymmetric GABAergic inhibition in the retina mediates wave directionality. (D) Developmental manipulation of wave directionality disrupts direction-selective responses in downstream superior colliculus neurons at eye opening.]

The ability to perceive and respond to environmental stimuli emerges in the absence of sensory experience. Spontaneous retinal activity prior to eye opening guides the refinement of retinotopy and eye-specific segregation in mammals, but its role in the development of higher-order visual response properties remains unclear. Here, we describe a transient window in neonatal mouse development during which the spatial propagation of spontaneous retinal waves resembles the optic flow pattern generated by forward self-motion. We show that wave directionality requires the same circuit components that form the adult direction-selective retinal circuit and that chronic disruption of wave directionality alters the development of direction-selective responses of superior colliculus neurons. These data demonstrate how the developing visual system patterns spontaneous activity to simulate ethologically relevant features of the external world and thereby instruct self-organization.


Video game 'FIFA 22' gets more realism thanks to 22-player motion capture matches

USATODAY - Tech Top Stories

To bring more realism to "FIFA 22," EA Sports went to extremes on the pitch – and brought inclusivity to its announcing team. The video game publisher had 22 players put on Xsens motion capture suits and then play competitive matches in Spain. All that data – more than 8.7 million frames of advanced match capture, EA Sports says – will be used to create real-time soccer gameplay animations as players mash controller buttons. The game maker is also bringing its first female announcer to the game: Alex Scott, who played for the English national team and for Arsenal in the Women's Super League. "This is a big moment for FIFA, for football and women and girls across the world," she said on Twitter and Instagram.


Technology Play at Tokyo Olympics 2020 - Express Computer

#artificialintelligence

The global pandemic has created havoc around the world, but the world has to move on, and the show must go on with all due precautions. The Tokyo 2020 Olympics are finally happening in 2021, with a grand opening ceremony this Friday, 23 July. Under these circumstances, there are undoubtedly many thoughts and opinions surrounding the event. While the actual games are played at the various Olympic arenas, technology is playing a leading role, making Tokyo 2020 the most innovative Olympic Games in history.


Pittsburgh Tech Company Wins $500K In Artificial Intelligence Competition

#artificialintelligence

Pittsburgh technology company Marinus Analytics won third place in the IBM Watson AI XPRIZE competition Wednesday, beating out nearly 800 competitors from around the world. Marinus uses artificial intelligence to sift through big data and help law enforcement agencies stop human trafficking and recover victims. "We had detectives saying, 'When we're looking for a missing child, the best we can do is print out a photo of their face, tape it to our computer screen, and scroll manually through online ads, and hope that we find them,'" Marinus president Emily Kennedy said in a video from the competition. The company's main product is Traffic Jam, a tool that uses facial recognition and other analytics to help law enforcement establish patterns and make connections across many trafficking websites with thousands of data points. Marinus estimates its software helped support 6,800 trafficking victims over a two-year span.


Searching for ROI in Artificial Intelligence Deployments

#artificialintelligence

Anyone who doubts the interest in AI and its use across enterprise technologies only needs to look at the Intelligent Document Processing (IDP) market and the kinds of verticals investing in it. According to the Everest Group's recently published Intelligent Document Processing (IDP) State of the Market Report 2021 (purchase required), the market for this segment alone was estimated at $700-750 million in 2020 and is expected to grow at a rate of 55-65% over the next year. Cost impact is now the key driver for intelligent document processing adoption, closely followed by improving operational efficiency and productivity. These solutions blend AI technologies to efficiently process all types of documents and feed the output into downstream applications. Optical character recognition (OCR), computer vision, machine learning (ML) and deep learning models, and natural language processing (NLP) are the core technologies powering IDP capabilities.
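
To illustrate the flavor of such a pipeline (not any vendor's actual product), here is a minimal sketch that OCRs a scanned invoice with the open-source pytesseract library and extracts one field with a regular expression standing in for the NLP stage. The file path and the "total" pattern are assumptions for illustration.

```python
# Illustrative IDP-style pipeline: OCR a scanned document, then pull one
# field with a regex. pytesseract and the invoice pattern are stand-ins
# for the commercial OCR/NLP stacks the report describes.
import re
from PIL import Image
import pytesseract

def extract_invoice_total(image_path):
    # Step 1: OCR (computer vision + character recognition)
    text = pytesseract.image_to_string(Image.open(image_path))
    # Step 2: lightweight "NLP" -- here just a pattern for a total line
    match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)
    # Step 3: structured output ready for a downstream application
    return {"raw_text": text, "total": match.group(1) if match else None}

result = extract_invoice_total("invoice_scan.png")  # hypothetical file
print(result["total"])
```

Real IDP products replace the regex with trained extraction models, but the three-step shape (capture, extract, hand off downstream) is the same.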


A lidar dev kit that plugs-and-plays out of the box

ZDNet

A foundational technology in autonomous vehicles, lidar is steadily making its way into a broader range of robots thanks to plummeting prices. Case in point: a company called Seoul Robotics just launched a ready-to-go, plug-and-play lidar perception system that can be deployed out of the box. Lidar, which was cost-prohibitive for most applications as little as five years ago, may be the key to unlocking a world in which robots take to the streets en masse. But for that to happen, developers need not only the hardware but also software designed for easy integration. "First and foremost, lidar sensors do not work without sophisticated perception software. The lidar industry is investing billions of dollars on sensors without even considering the software needed to interpret the data into actionable solutions," says HanBin Lee, CEO of Seoul Robotics.


An Intuitive Look at CNNs

#artificialintelligence

Each time you unlock your smartphone using Face ID or use real-time Google Translate with your camera, something insane is going on behind the scenes! CNNs are the backbone of many amazing applications and tools that we use all the time. This post will explain the intuition behind the workings of CNNs, without delving into complex probability functions and math equations. Everyone should have an opportunity to learn the basics of these tools, given how deeply ingrained they are in our lives now. For the nerdy folks, here is one of the best explanations, provided by Stanford University.
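
To ground that intuition, here is a tiny, self-contained CNN sketch in PyTorch: stacked convolutions learn local filters, pooling shrinks the feature maps, and a final linear layer classifies. The input size and layer sizes are illustrative assumptions, not taken from the post.

```python
# A tiny CNN sketch: convolutions detect local patterns, pooling shrinks
# the maps, a linear layer classifies. Sizes assume 28x28 grayscale
# inputs (e.g., MNIST) and are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn 8 local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine into richer patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 1, 28, 28))  # -> shape (1, 10)
```

Early layers respond to edges and textures; deeper layers combine them into parts and objects, which is the intuition the post builds on.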