
A Microsoft custom data type for efficient inference - Microsoft Research


AI is taking on an increasingly important role in many Microsoft products, such as Bing and Office 365. In some cases it powers outward-facing features like semantic search in Microsoft Word or intelligent answers in Bing, and deep neural networks (DNNs) are one key to these features. One crucial aspect of DNNs is inference: once a network is trained, it uses inference to make judgments about unknown information based on prior learning. In Bing, for example, DNN inference enables multiple search scenarios, including feature extraction, captioning, question answering, and ranking, all of which help customers get accurate, fast responses to their search queries. These scenarios have stringent latency requirements and must run at extremely large scale.
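The article's data type is Microsoft-specific, but the general idea behind narrow inference formats can be sketched with block floating point: the elements of a block share one exponent and each keeps only a few mantissa bits, trading precision for smaller, faster arithmetic. The snippet below is an illustrative simulation in plain Python; the bit width, rounding scheme, and function name are assumptions for the sketch, not the actual Microsoft format.

```python
import math

def quantize_block(values, mantissa_bits=4):
    # One shared exponent for the whole block, a narrow mantissa per
    # element -- an illustrative sketch, not Microsoft's actual format.
    max_abs = max(abs(v) for v in values)
    shared_exp = math.floor(math.log2(max_abs)) if max_abs else 0
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    # Round each element to the nearest representable step.
    return [round(v / scale) * scale for v in values]

weights = [0.51, -0.23, 0.10, 0.87]
q = quantize_block(weights)   # -> [0.5, -0.25, 0.125, 0.875]
```

Every element lands on a multiple of the shared step size (here 0.0625), which is what lets hardware store a whole block with one exponent and tiny per-element integers.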

The Gap between Research and Robust AI


Would you say deep learning models have become so good that robust AI systems are no longer a dream but a reality? Do you think you can safely use the latest models published by researchers in any real-world problem, like self-driving cars? Are you convinced that machines are already better than humans at processing and understanding images? I was, until I realized it is possible to deceive a state-of-the-art model, like DeepMind's Perceiver, with a few lines of code. In this article, I will show you how you can do that in less than 10 minutes through a hands-on example.
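The classic way to deceive a model with "a few lines of code" is an adversarial perturbation such as the fast gradient sign method: nudge every input feature a small step in the direction that most hurts the model. The toy below uses a hypothetical three-weight logistic classifier as a stand-in for a large model like Perceiver; the weights, input, and step size are made up for illustration, but the attack idea is the same in spirit.

```python
import math

# Hypothetical tiny logistic "classifier" standing in for a big model.
w = [2.0, -1.0, 0.5]

def predict(x):
    # Probability that x belongs to the positive class.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

x = [0.5, -0.2, 0.1]          # original input, confidently positive
eps = 0.4                     # perturbation budget per feature

# FGSM-style step: for a linear score, the gradient sign w.r.t. the
# input is just sign(w), so step each feature against it.
x_adv = [xi - eps * math.copysign(1, wi) for xi, wi in zip(x, w)]
```

Running `predict` on both inputs shows the original scoring above 0.5 and the barely-changed adversarial input scoring below it, i.e. the prediction flips even though each feature moved by at most 0.4.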

How #ArtificialIntelligence GANs work #dalle2 as an example, and how they will facilitate and…


Artificial intelligence has made a big impact on our lives over the past few years. From online shopping bots to voice assistants like Alexa, we are now more connected than ever thanks to AI technology. But how can we use AI in other areas? In this article, I will look at how artificial intelligence (AI) can revolutionize the world of art and design by creating new works from scratch and improving existing ones. GANs, or generative adversarial networks, are a type of deep learning model that can generate images, text, and other types of data.
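The "adversarial" part of a GAN is a two-player chase: a generator tries to produce data the discriminator accepts as real, while the discriminator tries to separate real from generated. The toy below is a deliberately tiny caricature of that dynamic, not a real GAN: both "networks" are single numbers, the real data is summarized by its mean, and the update rules are simplified assumptions made up for this sketch.

```python
# Caricature of the GAN minimax game with scalar "networks".
real_mean = 4.0     # stands in for the average of real training data

g_out = 0.0         # generator's parameter: the value it produces
d_boundary = 2.0    # discriminator's parameter: its decision boundary

lr = 0.05
for _ in range(500):
    # Discriminator step: place the boundary between real and fake.
    d_boundary += lr * ((real_mean + g_out) / 2 - d_boundary)
    # Generator step: push output toward the "real" side of the boundary.
    g_out += lr * (d_boundary - g_out)
```

After a few hundred alternating updates both parameters converge near 4.0: the discriminator's chase drags the generator's output onto the real data. Real GANs play the same game with neural networks and gradients instead of scalars.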

Breaking AIs to make them better


Today's artificial intelligence systems for image recognition are incredibly powerful, with massive potential for commercial applications. Nonetheless, current artificial neural networks, the deep learning algorithms that power image recognition, suffer from one massive shortcoming: they are easily broken by images that are even slightly modified. This lack of "robustness" is a significant hurdle for researchers hoping to build better AIs, yet exactly why the phenomenon occurs, and the underlying mechanisms behind it, remain largely unknown. Aiming to one day overcome these flaws, researchers at Kyushu University's Faculty of Information Science and Electrical Engineering have published a method in PLOS ONE called "Raw Zero-Shot" that assesses how neural networks handle elements unknown to them.

Detect Malicious JavaScript Code Using Machine Learning


In this article, we will consider approaches to detecting obfuscated JavaScript code snippets using machine learning. Most websites use JavaScript (JS) to create dynamic content, which makes JS code a valuable attack vector against browsers, browser plug-ins, email clients, and other JS applications. Common JS-based attacks include drive-by downloads, cross-site scripting (XSS), cross-site request forgery (XSRF), and malvertising (malicious advertising). Most malicious JS code is obfuscated to hide what it is doing and to evade detection by signature-based security systems. In other words, obfuscation is a sequence of code transformations that make the code hard to understand while preserving its functionality.
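Detectors along these lines typically extract lexical features from a snippet and feed them to a trained classifier. Below is a minimal Python sketch of the feature-extraction side only; the feature set, sample snippets, and helper names are illustrative assumptions, not the article's actual pipeline, and a real system would pass these features to a trained model such as a random forest.

```python
import math
from collections import Counter

def entropy(s):
    # Shannon entropy in bits per character; obfuscated code with
    # packed hex/base64 payloads tends to score higher.
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def features(js):
    # A few simple lexical features often used to flag obfuscation.
    return {
        "entropy": entropy(js),
        "avg_line_len": len(js) / max(js.count("\n") + 1, 1),
        "hex_ratio": js.count("\\x") / max(len(js), 1),
    }

plain = "function add(a, b) {\n  return a + b;\n}\n"
obfus = "var _0x1a=['\\x68\\x65\\x6c\\x6c\\x6f'];eval(_0x1a[0]);"
```

On these two samples, the obfuscated snippet shows a much longer average line and a nonzero ratio of `\x` hex escapes, exactly the kind of signal a classifier can learn from labeled corpora.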

QuantumTags: Three-Layer Authentication Through Self-Assembly Quantum-Dot Inkjet Printing for…


Integrity and trust are at the heart of humanity: they allow for peaceful nations and bonds between parties. When that integrity is tampered with, people become overprotective, once-trustful systems break down, and connections between communities are disrupted. On a global scale, counterfeit goods are where this integrity is violated most. Counterfeit pharmaceuticals cause the deaths of millions in developing nations (2), counterfeit batteries put everyday items at risk of bursting at any moment, and overall these goods cost the global market roughly $1.8 trillion, a number that is only increasing. During the COVID-19 pandemic, counterfeiting became even more urgent as global demand for medical supplies increased (2). In addition, some 500 identity frauds happen every day, showing that counterfeit goods have reached epidemic levels (10); in 2015, 10% of all luxury goods in Europe were counterfeit, and the share continues to rise (10). Current anti-counterfeiting solutions can easily be reverse-engineered, so the world needs a way to make counterfeiting impossible. Integrating nanotags and harnessing the randomness and uniqueness of quantum dots allows for an unclonable tag on each product, which the end user verifies through a deep learning algorithm. The tags are unclonable by any attacker, even one with a quantum computer, securing millions of lives and billions of dollars.

Top 10 most popular AI trends of the 2022 year


The tech media outlet Toolbox featured the views of 10 experts on the question "How will AI evolve in the next year?" and discussed in depth the cutting-edge technologies experts should watch. First place went to an article by MIT's Neil Thompson and his research team on the energy cost of training deep learning systems. Analyzing improvements in image classifiers, the team found that cutting the error rate in half can be expected to require 500 times more computational resources. "The rising cost requires researchers to devise more efficient ways to solve these problems; otherwise we will give up research on these problems, and progress will be difficult," Thompson said.
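Taken at face value, the 500x-per-halving figure implies compute grows as a steep power law in the inverse error rate. The back-of-the-envelope sketch below derives the exponent purely from that quoted figure, not from the underlying paper:

```python
import math

# If halving the error multiplies compute by ~500, then
# compute ~ error**(-k) with k = log(500) / log(2).
k = math.log(500) / math.log(2)   # about 8.97

def relative_compute(error_ratio):
    # Compute needed to reach error_ratio * (current error),
    # relative to today's compute.
    return error_ratio ** -k

one_halving = relative_compute(0.5)    # -> ~500
two_halvings = relative_compute(0.25)  # -> ~250,000 (500 * 500)
```

Two successive halvings already cost a quarter-million times today's compute, which is the arithmetic behind the quote's warning that progress on these problems will stall without more efficient methods.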

Best 15 real-life examples of machine learning - Dataconomy


Numerous examples show that machine learning (ML) can be extremely useful in a variety of crucial applications, including data mining, natural language processing, image recognition, and expert systems. In all of these areas and more, ML offers viable solutions and is destined to be a cornerstone of modern civilization. The history of machine learning shows that a good grasp of the machine learning lifecycle significantly increases its benefits for businesses. There are many uncommon machine learning examples that prove this, and you will find the best ones in this article. Machine learning uses statistical methods to increase a computer's intelligence, helping businesses make automatic use of all their data. Our growing reliance on machine learning technologies has transformed how we live; Google Assistant, for example, is built on ML principles.

Sinequa adds a neural search function to boost its enterprise platform


Enterprise search company Sinequa is adding a neural search option to its platform with the aim of improving accuracy and relevance for customers. Sinequa said the neural search function can answer natural language questions thanks to four deep learning language models it developed with Microsoft Azure and Nvidia teams, making it, in the company's words, the first commercially available system to use four such models. Combined with the platform's natural language processing and semantic search abilities, Sinequa said this will lead to improved question answering and search relevance. The Sinequa Search Cloud platform is designed to help employees find relevant information and insights from all enterprise sources, in any language, in the context of their work.

Google, Nvidia split top marks in MLPerf AI training benchmark


Google and Nvidia split the top scores in the twice-yearly benchmark test of artificial intelligence program training, according to data released Wednesday by MLCommons, the industry consortium that oversees the popular machine learning performance test MLPerf. In the version 2.0 round of MLPerf training results, Google took the top scores, measured as the lowest time to train a neural network, on four tasks for commercially available systems: image recognition, object detection (one test for small images and one for large), and the BERT natural language processing model. Nvidia took top honors for the other four of the eight tests with its commercially available systems: image segmentation, speech recognition, recommendation systems, and the reinforcement learning task of playing Go on the "mini Go" dataset. Both companies posted high scores on multiple benchmarks; however, Google reported commercially available results only for the four tests it won, while Nvidia reported results for all eight. MLCommons director David Kanter made the point that improvements in both hardware architectures and deep learning software have produced AI performance gains ten times what would be expected from traditional chip scaling alone.