Samsung to hire over 1,000 engineers from top colleges


These young engineers will work on various domains like artificial intelligence, machine learning, IoT, deep learning, networks, image processing, …

Automated deep learning-based paradigm for high-risk plaque detection in B-mode common carotid ultrasound scans: an asymptomatic Japanese cohort study.

The impact of deepfakes: How do you know when a video is real?


In a world where seeing is increasingly no longer believing, experts are warning that society must take a multi-pronged approach to combat the potential harms of computer-generated media. As Bill Whitaker reports this week on 60 Minutes, artificial intelligence can manipulate faces and voices to make it look like someone said something they never said. The result is videos of things that never happened, called "deepfakes." Often, they look so real, people watching can't tell. Just this month, Justin Bieber was tricked by a series of deepfake videos on the social media video platform TikTok that appeared to be of Tom Cruise.

Box introduces new anti-ransomware capabilities and other new features at BoxWorks 2021 …


"Deep learning technology complements traditional hash-based or … The machine learning capabilities coming to Box Shield are also being put to use …

Researchers Warn Of 'Dangerous' Artificial Intelligence-Generated Disinformation At Scale - Breaking Defense


WASHINGTON: Researchers at Georgetown University's Center for Security and Emerging Technology (CSET) are raising alarms about powerful artificial intelligence technology now more widely available that could be used to generate disinformation at a troubling scale. The warning comes after CSET researchers conducted experiments using the second and third versions of Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. GPT's text-generation capabilities are characterized by CSET researchers as "autocomplete on steroids." "We don't often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you're starting with to get it to write all sorts of things," Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings.

Capitalizing on the many artificial neural network uses – SearchEnterpriseAI


"The use of deep learning and visual experiences has been a key focus for us," … leader of artificial intelligence and head of data engineering at KPMG.

Top 5 Machine Learning Trends in 2021-2022


In 2021, recent innovations in machine learning have made many tasks more feasible, efficient, and precise than ever before. Based on an analysis of MobiDev's AI team's experience, we have listed the latest innovations in machine learning to benefit businesses in 2021-2022:

Trend 1. TinyML. It can take time for a web request to send data to a large server, have it processed by a machine learning algorithm, and receive the result back. A more desirable approach can be to run ML programs on edge devices instead: we achieve lower latency, lower power consumption, lower required bandwidth, and better user privacy.

Trend 2. AutoML. AutoML brings improved data-labeling tools to the table and enables automatic tuning of neural network architectures. Evgeniy Krasnokutsky, PhD, AI/ML Solution Architect at MobiDev, explains: "Traditionally, data labeling has been done manually by outsourced labor. This brings in a great deal of risk due to human error. Since AutoML aptly automates much of the labeling process, the risk of human error is much lower."

Human Detection of Machine-Manipulated Media

Communications of the ACM

The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy,[7] national security,[1] and art.[8,14] AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media.[21] For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement;[11,18,34,35,36] clone a speaker's voice with a few training samples and generate new natural-sounding audio of something the speaker never said;[2] synthesize visually indicated sound effects;[28] generate high-quality, relevant text based on an initial prompt;[31] produce photorealistic images of a variety of objects from text inputs;[5,17,27] and generate photorealistic videos of people expressing emotions from only a single image.[3,40] The technologies for producing machine-generated fake media online may outpace the ability to manually detect and respond to such media.

We developed a neural network architecture that combines instance segmentation with image inpainting to automatically remove people and other objects from images.[13,39] Figure 1 presents four examples of participant-submitted images and their transformations. The AI, which we call a "target object removal architecture," detects an object, removes it, and replaces its pixels with pixels that approximate what the background should look like without the object.
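The excerpt does not include the authors' model, but the two-stage idea (segment the object, then fill the hole from the background) can be illustrated with a minimal NumPy sketch. Here the segmentation mask is simply assumed to be given (in practice it would come from an instance segmentation model), and the learned inpainting network is replaced by a crude stand-in: diffusing surrounding background pixels into the masked region.

```python
import numpy as np

def remove_object(image, object_mask, iters=500):
    """Crude stand-in for the learned inpainting network: fill the
    masked region by repeatedly averaging 4-neighbour pixel values,
    so background colours diffuse into the hole.

    image: HxW (or HxWxC) array; object_mask: HxW boolean array
    marking the object found by the (assumed) segmentation stage.
    """
    out = image.astype(float).copy()
    hole = object_mask.astype(bool)
    # Seed the hole with the mean background colour, then relax.
    out[hole] = out[~hole].mean(axis=0)
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[hole] = avg[hole]  # only hole pixels are updated
    return out.astype(image.dtype)

# Demo: a bright "object" on a flat grey background disappears.
img = np.full((64, 64), 120, np.uint8)
img[20:40, 20:40] = 250                  # the object
mask = np.zeros((64, 64), bool)
mask[20:40, 20:40] = True                # its segmentation mask
clean = remove_object(img, mask)
```

A production system would replace both stages with learned models; the point of the sketch is only the pipeline shape: a per-pixel object mask in, a background-plausible fill out.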