Results


AI everywhere

#artificialintelligence

"We invented a computing model called GPU accelerated computing and we introduced it almost slightly over 10 years ago," Huang said, noting that while AI is only recently dominating tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world.


*Applause* YouTube's caption upgrade shows how machine learning is helping the disabled

#artificialintelligence

FCC rules require TV stations to provide closed captions that convey speech, sound effects, and audience reactions such as laughter to deaf and hard-of-hearing viewers. YouTube isn't subject to those rules, but thanks to Google's machine-learning technology, it now offers similar assistance. YouTube has used speech-to-text software to automatically caption speech in videos since 2009 (those captions are used 15 million times a day). Today it rolled out algorithms that indicate applause, laughter, and music in captions. More sounds could follow, since the underlying software can also identify noises like sighs, barks, and knocks.
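The feature described above amounts to turning a sound classifier's per-segment predictions into bracketed caption tokens. The sketch below is purely illustrative and is not YouTube's actual system; the label set, the `caption_sounds` function, the scores, and the threshold are all hypothetical assumptions.

```python
# Illustrative sketch: map per-segment sound-classifier scores to
# caption tags like [APPLAUSE], in the spirit of the feature described
# above. Labels, scores, and threshold are hypothetical, not YouTube's.

SOUND_LABELS = ("applause", "laughter", "music")

def caption_sounds(segment_scores, threshold=0.7):
    """Emit a bracketed caption tag for each segment whose top
    sound-class score clears the threshold; otherwise emit nothing."""
    captions = []
    for scores in segment_scores:
        # scores: dict mapping sound label -> classifier confidence
        label, conf = max(scores.items(), key=lambda kv: kv[1])
        if label in SOUND_LABELS and conf >= threshold:
            captions.append(f"[{label.upper()}]")
        else:
            captions.append("")
    return captions

# Example: three audio segments scored by a (hypothetical) classifier.
segments = [
    {"applause": 0.92, "laughter": 0.03, "music": 0.05},
    {"applause": 0.10, "laughter": 0.20, "music": 0.30},  # below threshold
    {"applause": 0.02, "laughter": 0.88, "music": 0.10},
]
print(caption_sounds(segments))  # ['[APPLAUSE]', '', '[LAUGHTER]']
```

Thresholding on the top class is the simplest possible decision rule; a production system would presumably also smooth predictions over time so tags don't flicker between adjacent segments.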


The Pint-Sized Supercomputer That Companies Are Scrambling to Get

MIT Technology Review

To companies grappling with complex data projects powered by artificial intelligence, a system that Nvidia calls an "AI supercomputer in a box" is a welcome development. Early customers of Nvidia's DGX-1, which combines machine-learning software with eight of the chip maker's highest-end graphics processing units (GPUs), say the system lets them train their analytical models faster, enables greater experimentation, and could facilitate breakthroughs in science, health care, and financial services. Data scientists have been leveraging GPUs to accelerate deep learning--an AI technique that mimics the way human brains process data--since 2012, but many say that current computing systems limit their work. Faster computers such as the DGX-1 promise to make deep-learning algorithms more powerful and let data scientists run deep-learning models that previously weren't possible. The DGX-1 isn't a magical solution for every company.
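Much of the speedup from a multi-GPU box like the DGX-1 comes from data parallelism: each GPU processes a slice of the training batch, and the per-device gradients are averaged before a shared model update. Here is a minimal pure-Python sketch of that idea, with the eight "devices" simulated in a loop; it involves no real GPUs and none of Nvidia's software stack.

```python
# Minimal sketch of data-parallel gradient averaging, the training
# pattern that multi-GPU systems like the DGX-1 accelerate. The eight
# "devices" are simulated; no GPU code is involved.

def gradient(w, batch):
    """Mean gradient of squared error for a 1-D linear model y = w*x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, data, n_devices=8, lr=0.005):
    """Split the batch across devices, compute per-device gradients,
    average them, and apply one shared update."""
    shards = [data[i::n_devices] for i in range(n_devices)]
    grads = [gradient(w, shard) for shard in shards if shard]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Toy data drawn from y = 3x; the weight should move toward 3.
data = [(x, 3.0 * x) for x in range(1, 17)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, data)
print(round(w, 3))  # converges near 3.0
```

Because every shard here has the same size, the average of the shard gradients equals the full-batch gradient, so the parallel step matches a single-device step exactly; real systems additionally overlap the gradient exchange with computation to hide communication cost.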


17 for '17: Microsoft researchers on what to expect in 2017 and 2027 - Next at Microsoft

#artificialintelligence

This week we are celebrating Computer Science Education Week around the globe. In this "age of acceleration," in which advances in technology and the globalization of business are transforming entire industries and society itself, it's more critical than ever for everyone to be digitally literate, especially our kids. This is particularly true for women and girls who, while representing roughly 50 percent of the world's population, account for less than 20 percent of computer science graduates in 34 OECD countries, according to this report. This has far-reaching societal and economic consequences. By 2020, the U.S. Bureau of Labor Statistics predicts that there will be 1.4 million computing jobs but just 400,000 computer science students with the skills to apply for those jobs.


News AICML

#artificialintelligence

Medical, agricultural, and computing science researchers at the University of Alberta and the AICML have developed a new test to detect E. coli. The PFM Scheduling Services website is now available here. The AICML has just released a new video talking about what machine learning is and what it can do for you. AICML researcher Patrick Pilarski recently gave a talk at TEDx Edmonton. The Critterbot Project is an initiative of the Reinforcement Learning and Artificial Intelligence (RLAI) lab at the University of Alberta.


When big data gets too big, this machine-learning algorithm may be the answer

#artificialintelligence

Big data may hold a world of untapped potential, but what happens when your data set is bigger than your processing power can handle? A new algorithm that taps quantum computing may be able to help. That's according to researchers from MIT, the University of Waterloo and the University of Southern California who published a paper Monday describing a new approach to handling massively complex problems. By combining quantum computing and topology -- a branch of geometry -- the new machine-learning algorithm can streamline highly complex problems and put solutions within closer reach. Topology focuses on properties that stay the same even when something is bent and stretched, and it's particularly useful for analyzing the connections in complex networks such as the U.S. power grid or the global interconnections of the Internet.
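As a concrete illustration of the kind of property topology cares about: the number of connected components of a network is unchanged by any amount of bending or stretching, only by cutting or joining links. The sketch below counts components of a toy network with union-find; it is an ordinary classical computation, unrelated to the researchers' actual quantum algorithm.

```python
# Counting connected components -- a simple topological invariant of a
# network that survives bending and stretching, as the article
# describes. A classical illustration, not the quantum algorithm.

def connected_components(nodes, edges):
    """Count connected components using union-find."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    return len({find(n) for n in nodes})

# A toy "power grid": two connected clusters plus an isolated station F.
nodes = ["A", "B", "C", "D", "E", "F"]
edges = [("A", "B"), ("B", "C"), ("D", "E")]
print(connected_components(nodes, edges))  # 3 components
```

Rearranging the nodes or lengthening the links changes nothing here; only adding or removing an edge can change the answer, which is what makes such counts useful summaries of very large networks.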


10 UK IoT degree courses covering UI, AI & machine learning

#artificialintelligence

This includes notions of data analysis, storage and processing, distributed and networked systems (covering areas such as algorithms for distributed coordination, time-synchronisation, scalable storage, virtualisation and cloud computing technologies), and information security. The four key areas of the course include the IoT ecosystem, data science (including time series data), programming and problem solving, and online engagement and programming exercises. Taught through the Department of Computing of the institution's Engineering Faculty, the course teaches students programming skills including Prolog and Matlab, software engineering, and computing trends and their applications in industrial scenarios. Specific modules include artificial intelligence, computational management science, distributed systems, machine learning, visual information processing and software engineering.


Artificial Intelligence News: Artificial Intelligence News Issue 30

#artificialintelligence

Artificial Intelligence (AI) is a modern computing field that attempts to advance machine capabilities to the point where they transcend human intelligence. Creating a successful AI is contingent upon success... Have you heard the one about how our jobs are about to be snatched away by machines? Despite the latest reports claiming the development of artificial intelligence (AI) could end humanity within decades, it recently proved its worth in cancer diagnosis. Dr. Ehud Reiter (Chief Scientist, Arria NLG; Professor of Computing Science, University of Aberdeen): Artificial Intelligence has made huge advances in recent years in many areas, including language processing, vision, and machine learning; we are also seeing the emergence of platforms that integrate different kinds of AI, such as IBM Watson (Arria is a Watson ecosystem partner).


Statisticians step up to aid neurological health research - Faculty of Science - University of Alberta

#artificialintelligence

Linglong Kong (mathematical and statistical sciences) is the co-lead of a new collaboration of 18 researchers across North America working together to improve the way neuroimaging data is analyzed. Another collaborator, computing science professor Russ Greiner, is using computers to find patterns in a process known as statistical machine learning. The project, Joint Analysis of Neuroimaging Data: High-Dimensional Problems, Spatio-Temporal Models and Computation, is funded through 2019 with $180,000 in Collaborative Research Team (CRT) Project funding from the Canadian Statistical Sciences Institute (CANSSI).