Machine Learning


As AI Becomes Ever More Capable, Will It End Up Helping, Or Hindering, The Hackers?

#artificialintelligence

Hacking events have increasingly been in the news this year, as a range of serious ransomware and supply chain hacks have wreaked havoc on businesses and infrastructure. The latest (as of July 2021) is a supply-chain ransomware attack against Miami-based software firm Kaseya, affecting 1,500 of its customers, with the hackers (threat actors) demanding $70 million in cryptocurrency to release the data. According to the World Economic Forum, cyber-attacks now stand side by side with climate change and natural disasters as one of the most pressing threats to humanity. No doubt ways will eventually be found to detect and pre-empt these latest styles of attack. The cybersecurity industry is defined by continual, if largely gradual, innovation: as new threats emerge, so does technology that protects against, detects and responds to the attacks. This cat-and-mouse dynamic has been a fundamental trait of the industry to date: a permanently iterating relationship that supercharges the development of new technologies on both sides, where even a small edge over adversaries can pay dividends (or ransoms).


MLOps in 2021: The pillar for seamless Machine Learning Lifecycle

#artificialintelligence

MLOps is the term for the operational work needed to move machine learning projects from research mode to production. Just as software engineering relies on DevOps to operationalize applications, MLOps encompasses the processes and tools that manage the end-to-end machine learning lifecycle. A machine learning model learns relationships among independent (input) variables in order to predict target (output) variables. Machine learning projects involve a range of roles and responsibilities: the data engineering team collects, processes and transforms data; data scientists experiment with algorithms and datasets; and the MLOps team focuses on moving trained models into production. The machine learning lifecycle covers this complete journey from research mode to production.
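To make the hand-off between those roles concrete, here is a minimal sketch of a train, validate and package step, assuming scikit-learn and joblib; the dataset, accuracy threshold and file name are illustrative choices, not details from the article.

    # Minimal sketch of one train-then-package step in an ML lifecycle.
    # Assumes scikit-learn and joblib; dataset, threshold and file name
    # below are illustrative, not from the article.
    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)                    # data engineering output
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # experimentation
    accuracy = accuracy_score(y_test, model.predict(X_test))         # validation gate

    if accuracy >= 0.9:                                  # promote only if the gate passes
        joblib.dump(model, "model-v1.joblib")            # hand-off artifact for deployment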


Platform teaches nonexperts to use machine learning

#artificialintelligence

Machine-learning algorithms are used to find patterns in data that humans wouldn't otherwise notice, and are being deployed to help inform decisions big and small, from COVID-19 vaccine development to Netflix recommendations. New award-winning research from the Cornell Ann S. Bowers College of Computing and Information Science explores how to help nonexperts effectively, efficiently and ethically use machine-learning algorithms, so that industries beyond the computing field can harness the power of AI. "We don't know much about how nonexperts in machine learning come to learn algorithmic tools," said Swati Mishra, a Ph.D. student in the field of information science. "The reason is that there's a hype that's developed that suggests machine learning is for the ordained." Mishra is lead author of "Designing Interactive Transfer Learning Tools for ML Non-Experts," which received a Best Paper Award at the annual ACM CHI Virtual Conference on Human Factors in Computing Systems, held in May. As machine learning enters fields and industries traditionally outside of computing, the need for research and for effective, accessible tools that help new users leverage artificial intelligence is unprecedented, Mishra said.
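The paper's interactive tools are not described here, so as background only, this is a generic transfer-learning sketch assuming PyTorch and torchvision: a pretrained ResNet-18 backbone is frozen and only a small task-specific head is retrained. The two-class head and optimizer settings are arbitrary, not taken from the paper.

    # Generic transfer-learning sketch (not the paper's tool): reuse a
    # pretrained ResNet-18 and retrain only a new classification head.
    # Assumes PyTorch and torchvision >= 0.13; the two-class head is arbitrary.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for param in model.parameters():                # freeze the pretrained backbone
        param.requires_grad = False

    model.fc = nn.Linear(model.fc.in_features, 2)   # new task-specific head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    # ...train only model.fc on the new, smaller dataset...

Because only the head is trained, the new task typically needs far less labeled data than training a network from scratch, which is what makes transfer learning attractive to non-experts.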


U of T prof's AI startup, Deep Genomics, raises US$180 million: The Globe and Mail

#artificialintelligence

Deep Genomics, an artificial intelligence startup founded by the University of Toronto's Brendan Frey, has secured US$180 million from investors, including Japanese multinational Softbank and Canada Pension Plan Investments, the Globe and Mail reported. Launched in 2015, the startup uses machine learning to develop treatments for genetic diseases. According to the Globe and Mail, Deep Genomics currently has 10 drugs in pre-clinical development, four of which are set to enter human trials by mid-2023. It is also working with San Francisco Bay-area biopharmaceutical company BioMarin Pharmaceutical Inc. to identify drug candidates for rare diseases. "These are all new chemical entities that would not exist" without Deep Genomics' technology, Frey, who is CEO of Deep Genomics and a professor in U of T's Faculty of Applied Science & Engineering, told the Globe.


Machine learning applications need less data than has been assumed

#artificialintelligence

A combined team of researchers from the University of British Columbia and the University of Alberta has found that at least some machine learning applications can learn from far fewer examples than has been assumed. In their paper published in the journal Nature Machine Intelligence, the group describes testing they carried out with machine learning applications created to predict certain types of molecular structures. Machine learning can be used in a wide variety of applications; one of the best known is learning to spot people or objects in photographs. Such applications typically require huge amounts of data for training. In this new effort, the researchers found that in some instances, machine learning applications do not need such huge amounts of data to be useful.


The Rise of the Transformers: Explaining the Tech Underlying GPT-3

#artificialintelligence

The capabilities of GPT-3 have led to a debate between those who believe that GPT-3 and its underlying architecture will enable Artificial General Intelligence (AGI) in the future and those (many from the school of logic and symbolic AI) who believe that without some form of logic there can be no AGI. The truth of the matter is that we don't know, because we don't really fully understand the human brain. With science and engineering we work on the basis of observation and testing. This section also addresses points raised by Esaú Flores. Gary Grossman, in an article entitled "Are we entering the AI Twilight Zone between AI and AGI?", observed that in February 2020, Geoffrey Hinton, the University of Toronto professor who is a pioneer of Deep Learning, noted: "There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses." The human brain has a huge number of synapses: each of its roughly 10¹¹ (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons, and it has been estimated that the brain of a three-year-old child has about 10¹⁵ synapses (1 quadrillion).
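As a quick back-of-the-envelope check of those figures (an illustration, not part of the article), multiplying the quoted neuron count by the average number of synapses per neuron lands in the same range as the quadrillion-synapse estimate:

    # Back-of-the-envelope check of the synapse figures quoted above.
    neurons = 1e11               # ~one hundred billion neurons
    synapses_per_neuron = 7_000  # average connections per neuron
    total = neurons * synapses_per_neuron
    print(f"{total:.0e}")        # ~7e14, the same order of magnitude as ~1e15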


OpenAI releases Triton, a programming language for AI workload optimization

#artificialintelligence

OpenAI today released Triton, an open source, Python-like programming language that enables researchers to write highly efficient GPU code for AI workloads. Triton makes it possible to reach peak hardware performance with relatively little effort, OpenAI claims, producing code on par with what an expert could achieve in as few as 25 lines. Deep neural networks have emerged as an important type of AI model, capable of achieving state-of-the-art performance across natural language processing, computer vision, and other domains. The strength of these models lies in their hierarchical structure, which generates a large amount of highly parallelizable work well-suited for multicore hardware like GPUs.
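To give a flavor of what such code looks like, here is a minimal element-wise-add kernel in the spirit of Triton's introductory examples; it assumes the triton package, PyTorch and a CUDA-capable GPU, and is illustrative rather than taken from OpenAI's announcement.

    # Minimal Triton kernel sketch (element-wise add), in the spirit of the
    # project's introductory examples; assumes triton, PyTorch and a CUDA GPU.
    # Illustrative only, not code from OpenAI's announcement.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                      # each program handles one block
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                      # guard the tail of the array
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)               # one program per 1024 elements
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)

The kernel body is plain Python: the block arithmetic and masking take the place of the manual thread indexing and bounds checks that hand-written CUDA would require.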


Synthetic Data May Be The Solution to AI Privacy Concerns

#artificialintelligence

AI is hungry for data. Training and testing the machine-learning tools to perform desired tasks consumes huge lakes of data. More data often means better AI. Yet gathering this data, especially data concerning people's behavior and transactions, can be risky. For example, in January of this year, the US FTC reached a consent order with a company called Everalbum, a developer of photography apps.
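As an illustration of the general idea only (not the method discussed in the article), a simple generative model can be fitted to real records and then sampled to produce synthetic rows; the sketch below uses scikit-learn's GaussianMixture as a stand-in for purpose-built synthetic-data tools, and the data is a random placeholder.

    # Illustrative-only sketch: generate synthetic records by fitting a
    # simple generative model to real tabular data and sampling from it.
    # GaussianMixture stands in for purpose-built synthetic-data tools.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    real_data = rng.normal(size=(1000, 5))   # placeholder for sensitive records

    gmm = GaussianMixture(n_components=3, random_state=0).fit(real_data)
    synthetic_data, _ = gmm.sample(500)      # new rows that mimic the distribution

    # Downstream models can train on synthetic_data instead of real_data,
    # reducing direct exposure of individuals' records (though this alone
    # is not a formal privacy guarantee).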


OpenAI proposes Triton language as an alternative to Nvidia's CUDA

ZDNet

Graphics processing units from Nvidia are too hard to program, including with Nvidia's own programming tool, CUDA, according to artificial intelligence research firm OpenAI. The San Francisco-based AI startup, which is backed by Microsoft and VC firm Khosla Ventures, on Wednesday introduced the 1.0 version of a new programming language specially crafted to ease that burden, called Triton, detailed in a blog post, with a link to the GitHub source code. OpenAI claims Triton can deliver substantial ease-of-use benefits over coding in CUDA for some neural network tasks at the heart of machine learning, such as matrix multiplications. "Our goal is for it to become a viable alternative to CUDA for Deep Learning," the leader of the effort, OpenAI scientist Philippe Tillet, told ZDNet via email. Triton "is for machine learning researchers and engineers who are unfamiliar with GPU programming despite having good software engineering skills," said Tillet.


Council Post: How To Build A Perfect AI Team

#artificialintelligence

Artificial intelligence (AI) is now on a mission to permeate every industry. From e-commerce and healthcare to travel and finance, AI has made its way into just about every sector. In fact, the adoption rate of AI has increased by more than 270%, according to Gartner, Inc., and 37% of businesses now use AI-driven technologies such as natural language processing, predictive analysis, machine learning and robotic process automation. So if you're still not using AI in your business, it's highly likely your competitors already are, and very soon you'll be left behind. That's why we have put together this article to help you build your own AI team, eliminate your existing bottlenecks and achieve your business goals.