- North America > United States > Colorado > Jefferson County > Golden (0.15)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Vision (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.93)
Adaptive Iterative Compression for High-Resolution Files: An Approach Focused on Preserving Visual Quality in Cinematic Workflows
Melo, Leonardo; Litaiff, Filipe
This study presents an iterative adaptive compression model for high-resolution DPX-derived TIFF files used in cinematographic workflows and digital preservation. The model employs SSIM and PSNR metrics to dynamically adjust compression parameters across three configurations (C0, C1, C2), achieving storage reductions of up to 83.4% while maintaining high visual fidelity (SSIM > 0.95). Validation across three diverse productions (a black-and-white classic, a soft-palette drama, and a complex action film) demonstrated the method's effectiveness in preserving critical visual elements while significantly reducing storage requirements. Professional evaluators reported a 90% acceptance rate for the optimal C1 configuration, with artifacts remaining below the perceptual threshold in critical areas. Comparative analysis with JPEG2000 and H.265 showed superior quality preservation at equivalent compression rates, particularly for high bit-depth content. Although the method requires additional computational overhead, its storage benefits and quality-control capabilities make it suitable for professional workflows, with potential applications in medical imaging and cloud storage optimization.
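As a rough illustration of the metric-guided loop the abstract describes, the sketch below iteratively lowers a compression-quality parameter until SSIM against the original would fall under a target. This is a minimal sketch only: the JPEG quality knob, the grayscale SSIM comparison, and the 0.95 threshold are stand-ins for the paper's actual C0/C1/C2 TIFF configurations, which are not reproduced here.

```python
# Minimal sketch of SSIM-guided iterative compression (illustrative only;
# assumes an RGB or grayscale input image, not the paper's DPX/TIFF pipeline).
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def compress_to_ssim_target(img: Image.Image, ssim_target: float = 0.95,
                            qualities=range(95, 25, -5)) -> bytes:
    """Return the smallest encoding whose SSIM vs. the original stays at or
    above ssim_target (falls back to the highest-quality encoding)."""
    ref = np.asarray(img.convert("L"), dtype=np.float64)
    best = None
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)  # quality knob stands in for C0/C1/C2
        buf.seek(0)
        decoded = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
        if ssim(ref, decoded, data_range=255.0) < ssim_target:
            break  # quality has dropped below the perceptual target
        best = buf.getvalue()  # last encoding that still met the target
    return best if best is not None else buf.getvalue()
```

The design choice here is to keep the last encoding that still met the target, so the loop always errs on the side of fidelity rather than storage savings.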
- Workflow (0.83)
- Research Report (0.82)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Energy > Oil & Gas > Upstream (0.50)
- Health & Medicine > Diagnostic Medicine > Imaging (0.48)
Reviews: An Online Sequence-to-Sequence Model Using Partial Conditioning
This is a well-done paper. It attacks a worthwhile problem: how to construct and train a sequence-to-sequence model that can operate online instead of waiting for the entire input to be received. It clearly describes an architecture for solving the problem and walks the reader through the issues in the design of each component: next-step prediction, the attention mechanism, and modeling the ends of blocks. It clearly explains the challenges that must be overcome to train the model and perform inference with it, and proposes reasonable approximate algorithms for both. The speech recognition experiments, which demonstrate the utility of the transducer model and explore design issues such as maintaining recurrent state across block boundaries, block size, the design of the attention mechanism, and the depth of the model, are reasonable.
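For readers unfamiliar with the block-wise processing the review refers to, here is a minimal sketch, not the paper's transducer itself, of how an encoder can consume input block by block while carrying recurrent state across block boundaries; the GRU encoder and the block_size parameter are illustrative assumptions.

```python
# Illustrative block-streaming encoder: state is carried across blocks so
# each block sees context from everything received so far.
import torch
import torch.nn as nn

class BlockStreamingEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, block_size: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.block_size = block_size

    def forward(self, x: torch.Tensor):
        # x: (batch, time, in_dim). Each block is encoded as it "arrives",
        # reusing the final hidden state of the previous block.
        h = None
        outputs = []
        for start in range(0, x.size(1), self.block_size):
            block = x[:, start:start + self.block_size]
            out, h = self.rnn(block, h)  # h maintains state across boundaries
            outputs.append(out)
        return torch.cat(outputs, dim=1), h
```

An attention mechanism of the kind the review discusses would then attend over each block's outputs as they become available, rather than over the full input sequence.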
Reviews: Processing of missing data by neural networks
The paper provides a theoretical and practical justification for using a density function to represent missing data while training a neural network. An obvious upside is that training can be done with incomplete data, unlike with a denoising autoencoder, for example; this can be very helpful in many applications. My comments are:
- It is stated that if all the attributes are complete then the density is not used. If we have access to a huge amount of complete training data and a relatively small amount of training data with missing attributes, how trustworthy is our estimate of the density function? Can't we benefit from the complete data? Do we really have to remove attributes, as is done in the ESR task?
- In the above case, would a denoising autoencoder outperform?
- How would the generalized activation impact the training time?
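To make the "generalized activation" the review asks about more concrete: the core idea is to replace a point activation with its expectation under a density over the missing inputs. The sketch below shows only the simplest instance, the closed-form expectation of a ReLU under a single Gaussian; the paper's full GMM-conditioned formulation is not reproduced here.

```python
# For z ~ N(mu, sigma^2), E[max(z, 0)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma),
# where Phi and phi are the standard normal CDF and PDF.
import math

def expected_relu(mu: float, sigma: float) -> float:
    """Closed-form E[ReLU(z)] for z ~ N(mu, sigma^2)."""
    if sigma == 0.0:
        return max(mu, 0.0)  # no uncertainty: plain ReLU
    t = mu / sigma
    pdf = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return mu * cdf + sigma * pdf
```

Because this expectation has a closed form, it adds only a constant-factor cost per unit over a plain ReLU, which bears on the reviewer's question about training time.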
AI Data Processing: Near-Memory Compute for Energy-Efficient Systems
Almost universally, today's systems must operate within limited system-level power budgets. For these power-bound systems, saving energy anywhere in the system enables more energy for compute and hence higher system performance. A tantalizing opportunity exists to achieve system-energy savings by keeping data commutes between memory and processing as short as possible. Energy savings should be the primary goal, our North Star for computing near memory. At the recent International Solid-State Circuits Conference (ISSCC), I gave a presentation titled: "We have rethought our commute; Can we rethink our data's commute?"
MWC Barcelona 2019: 5G Is Putting Robots' Heads in the Cloud
I almost missed out on the vanguard of the 5G robot revolution because I don't drink coffee. The near miss occurred on Monday at mobile network trade association GSMA's booth at MWC Barcelona (formerly called Mobile World Congress), where Dal.Komm Coffee was demonstrating a coffee-serving robot. Thankfully for non-coffee-drinking weirdos like me, hot chocolate was also available. After I placed an order via a nearby smartphone, a robotic arm behind a glass panel juggled cups, operated coffee makers, and gently placed drinks on trays so they could be collected by waiting humans. According to a representative of Dal.Komm, the robot's precise movement was only possible with a 5G network provided by KT Corporation, the Korean telco.
3 Companies Using Artificial Intelligence to Their Advantage
Artificial intelligence (AI) is already affecting our lives in many ways. From intelligent video curation on Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) YouTube and Google web search to Apple's (NASDAQ:AAPL) Siri personal assistant, AI is already making our lives easier. AI can also help corporations and customers fight rapidly evolving cyberthreats. For instance, FireEye's (NASDAQ:FEYE) Helix cybersecurity platform can automate threat detection and prevention with the help of this emerging technology. The early adoption of AI by Alphabet, Apple, and FireEye could help them steal a march on their rivals.
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Networks (0.99)
- Government > Military > Cyberwarfare (0.52)
Why AI Could Be Entering a Golden Age - Knowledge@Wharton
The quest to give machines human-level intelligence has been around for decades, and it has captured imaginations for far longer -- think of Mary Shelley's Frankenstein in the 19th century. Artificial intelligence, or AI, was born in the 1950s, with boom cycles leading to busts as scientists failed time and again to make machines act and think like the human brain. But this time could be different because of a major breakthrough: deep learning, in which data structures modeled on the brain's neural networks let computers learn on their own. Together with advances in computing power and scale, AI is making bigger strides today than ever before. Frank Chen, a partner specializing in AI at the top venture capital firm Andreessen Horowitz, makes the case that AI could be entering a golden age.
So, bots you say… – The AI guys – Medium
It is very likely that you've heard all the buzz going around lately about chatbots and how they're going to revolutionize everything in the coming years; if you haven't, let me guide you through the revolution. Fear no more, dear reader: this is (part one of) all you need to know about chatbots. In general terms, a bot is a piece of software that automates a task; speaking specifically of chatbots, we come to the idea of automating an interaction through a conversational UI. But don't mind my fancy wording: chatbots are simply a way to automate a written conversation, simulating an interaction between two real human beings, as in the sketch below.
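To ground that definition, here is a minimal sketch of a rule-based chatbot that automates a written conversation with keyword matching; the keywords and canned replies are illustrative assumptions, and real chatbots typically replace this lookup with NLP models.

```python
# Toy keyword-matching chatbot: automating an interaction through a
# conversational UI in its simplest possible form.
RULES = {
    "hello": "Hi there! How can I help you today?",
    "price": "Our plans start at $10/month.",
    "bye": "Goodbye! Have a great day.",
}

def reply(message: str) -> str:
    """Return the canned answer for the first keyword found in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

if __name__ == "__main__":
    # A written conversation loop: read a line, answer, stop on "bye".
    while True:
        user = input("you> ")
        print("bot>", reply(user))
        if "bye" in user.lower():
            break
```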