Griffith University researchers have developed an AI video-surveillance system that detects social-distancing breaches at an airport without compromising privacy. By keeping image processing confined to a local network of cameras, the team avoided the traditional need to store sensitive data on a central system. Professor Dian Tjondronegoro from Griffith Business School says data privacy is one of the biggest concerns with this technology, because the system has to constantly observe people's activities to be effective. "These adjustments are added to the central decision-making model to improve accuracy." Published in Information, Technology & People, the case study was conducted at Gold Coast Airport, which pre-COVID-19 handled 6.5 million passengers annually, with 17,000 passengers on-site daily.
In this decade, companies across the globe have embraced the potential of artificial intelligence for digital transformation and enhanced customer experience. One important application of AI is enabling companies to put the pools of data available to them to smart business use. BMW is one of the world's leading manufacturers of premium automobiles and mobility services. BMW uses artificial intelligence in critical areas such as production, research and development, and customer service. BMW also runs a dedicated initiative, Project AI, to ensure efficient use of artificial intelligence.
AI in medicine, particularly in pediatric medicine, holds much promise for taking scarce human expertise and making it available throughout rural America and to the rest of the world. Rwanda, for instance, has only one pediatric cardiologist in the entire country. In 2015, neural network technology succeeded in building computer algorithms that were better than humans at image recognition, signaling the beginning of this renaissance in AI. But, as the above chart courtesy of Jeff Dean, head of Google Brain, shows, the only way to achieve increasing degrees of accuracy is to have more and more data. Any of you in major metro areas will see Waymo vans driving around collecting more and more data to feed autonomous driving software development.
The power of deepfake tech to hone digital effects into incredibly realistic video shouldn't be underestimated. We've seen a top-level Tom Cruise impersonator transformed by a high-level deepfake artist, and now companies, including film studios, are taking notice. Luke Skywalker's CGI face in The Mandalorian was met with a lot of criticism, and one fan's efforts to improve it resulted in a new job: Lucasfilm has hired YouTuber Shamook to ensure future projects won't have wobbly representations of actors who are either much older now or perhaps even deceased. The latter, however, remains an ethical conundrum in itself, as demonstrated by the recent Anthony Bourdain documentary.
Technological developments in this decade have led to some of the most awe-inspiring discoveries. With rapidly changing technology, and systems to support it and provide back-end processing power, the world seems to be becoming a better place to live day by day. Technology has reached such heights that little our ingenious minds conceive of today looks impossible to accomplish. The driving forces behind these advancements in this new era of technological and computational superiority are two of the most widely debated domains: Machine Learning and Artificial Intelligence. The canvas and idea space that these two domains provide are all but unfathomable.
In the nine years since AlexNet spawned the age of deep learning, artificial intelligence (AI) has made significant technological progress in medical imaging, with more than 80 deep-learning algorithms approved by the U.S. FDA since 2012 for clinical applications in image detection and measurement. A 2020 survey found that more than 82% of imaging providers believe AI will improve diagnostic imaging over the next 10 years, and the market for AI in medical imaging is expected to grow 10-fold in the same period. Despite this optimistic outlook, AI still falls short of widespread clinical adoption in radiology. A 2020 survey by the American College of Radiology (ACR) revealed that only about a third of radiologists use AI, mostly to enhance image detection and interpretation; of the two-thirds who did not, the majority said they saw no benefit in it. In fact, most radiologists would say that AI has not transformed image reading or improved their practices.
Leon Gatys et al. introduced the Neural Style Transfer technique in 2015 in "A Neural Algorithm of Artistic Style". As stated earlier, Neural Style Transfer is a technique for composing one image in the style of another. Neural Style Transfer (NST) refers to a class of software algorithms that manipulate digital images or videos so that they adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks to perform the image transformation. If you want to go deep into the original technique, you can refer to the paper itself.
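The core of the Gatys et al. formulation is that "style" is captured by the Gram matrix of a layer's feature maps, and the style loss is the normalized squared distance between the Gram matrices of the style image and the generated image. Below is a minimal NumPy sketch of just that loss term; the array shapes and function names are illustrative, and a real NST pipeline would extract the feature maps from a pretrained convolutional network (the paper uses VGG) and minimize the loss by gradient descent on the generated image.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's feature maps.

    features: array of shape (C, H, W) -- C channels of an H x W feature map.
    Returns a (C, C) matrix of inner products between channel activations,
    which discards spatial layout and keeps texture/style statistics.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T

def style_loss(style_features, generated_features):
    """Squared Gram-matrix distance, with the 1/(4 C^2 (HW)^2)
    normalization used in Gatys et al. (2015)."""
    c, h, w = style_features.shape
    g_style = gram_matrix(style_features)
    g_gen = gram_matrix(generated_features)
    return np.sum((g_style - g_gen) ** 2) / (4.0 * c**2 * (h * w) ** 2)

# Toy check: identical feature maps produce zero style loss.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 4))
print(style_loss(feats, feats))  # 0.0
```

In the full algorithm this term is summed over several network layers and combined with a content loss (a plain squared error on deeper-layer features), so the generated image matches the content of one input and the texture statistics of the other.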
In the age of global warming, forest fires are becoming more frequent and faster-growing. Clearly, the world needs sustainable solutions to preserve our natural resources, protect human lives, and avoid economic devastation. As an environmental advocate and sustainability enthusiast, I got to thinking about whether a technological solution could help with this daunting task. Fortunately, I am also a computer scientist, one who is all too aware of how tedious and time-consuming research can be. In such times, I often choose to play my ace in the hole by going straight to Intel's rich ecosystem, the Intel Partner Alliance. Not surprisingly, it led me to an ingenious solution: the AAEON Intelligent Forest Fire Monitoring System (Figure 1).
All the sessions from Transform 2021 are available on-demand now. In 2019, Google released Translatotron, an AI system capable of directly translating a person's voice into another language. The system could synthesize translated speech while keeping the sound of the original speaker's voice intact. But Translatotron could also be used to generate speech in a different voice, making it ripe for potential misuse in, for example, deepfakes. This week, researchers at Google quietly released a paper detailing Translatotron's successor, Translatotron 2, which addresses that issue by restricting the system to retaining the source speaker's voice.
In the new documentary "Roadrunner," about the life of celebrity chef Anthony Bourdain, the filmmakers made a controversial choice. The director, Morgan Neville, commissioned a software company to re-create Bourdain's voice digitally, synthesizing three lines of voice-over. The lines were statements that Bourdain wrote but never uttered before his death in 2018. The artificial-intelligence technique used to craft the fictitious audio is known as a "deepfake," and it has set off a debate online since food writer Helen Rosner published a piece in the New Yorker last week, "The Ethics of a Deepfake Anthony Bourdain Voice," interviewing Mr. Neville about the decision. While the audiovisual technology that allows this kind of trickery has long been in development, the word "deepfake" first emerged in late 2017.