Information Technology


The economics of Artificial Intelligence today

#artificialintelligence

Economists have been studying the relationship between technological change, productivity and employment since the beginning of the discipline, starting with Adam Smith's pin factory. It should therefore come as no surprise that AI systems able to behave appropriately in a growing number of situations - from driving cars to detecting tumours in medical scans - have caught their attention. In September 2017, a group of distinguished economists gathered in Toronto to set out a research agenda for the Economics of Artificial Intelligence (AI). They covered questions such as what is economically unique about AI, what its impacts will be, and what the right policies are to enhance its benefits. I recently had the privilege of attending the third edition of this conference in Toronto and witnessing first-hand how this agenda has evolved over the last two years.


Artificial Intelligence is here but can we make it trustworthy? - Vox Markets

#artificialintelligence

On Monday 8th April 2019, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) revealed ethics guidelines aimed at forming best practices for creating "trustworthy AI." In fact, many argue that trust in AI systems is one of the main hurdles the technology must overcome before more widespread implementation. A Forbes survey found that nearly 42% of respondents "could not cite a single example of AI that they trust"; in another survey, when respondents were asked what emotion best described their feeling towards AI, "interested" was the most common response (45%), but it was closely followed by "concerned" (40.5%), "skeptical" (40.1%), "unsure" (39.1%), and "suspicious" (29.8%). The Commission's guidelines offer businesses a new roadmap against which to align their AI systems. While these guidelines are not policy, it is easy to imagine them serving as the building blocks for future regulation.


r/MachineLearning - [D] Benchmarking Transformers on both PyTorch and TensorFlow

#artificialintelligence

Since our recent release of Transformers (previously known as pytorch-pretrained-BERT and pytorch-transformers), we've been working on a comparison between the implementation of our models in PyTorch and in TensorFlow. We've released a detailed report where we benchmark each of the architectures hosted on our repository (BERT, GPT-2, DistilBERT, ...) in PyTorch with and without TorchScript, and in TensorFlow with and without XLA. We benchmark them for inference and the results are visible in the following spreadsheet. We would love to hear your thoughts on the process.
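
To make the comparison concrete, here is a minimal sketch of the PyTorch side of such a benchmark: timing BERT inference in eager mode versus a TorchScript-traced version of the same model. This is an illustrative example, not the authors' actual benchmark script; the model name, input sentence, and run count are assumptions.

```python
# Minimal sketch: compare eager PyTorch vs. TorchScript inference for BERT.
import time
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# torchscript=True configures the model so it can be traced by torch.jit
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

# Hypothetical input; real benchmarks sweep batch and sequence sizes
input_ids = tokenizer.encode("Benchmarking transformer inference.",
                             return_tensors="pt")

# Trace the model once to obtain a TorchScript version
traced_model = torch.jit.trace(model, (input_ids,))

def mean_runtime(fn, runs=100):
    """Average seconds per forward pass, after one warm-up call."""
    with torch.no_grad():
        fn(input_ids)  # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            fn(input_ids)
    return (time.perf_counter() - start) / runs

print(f"eager:       {mean_runtime(model):.4f} s/run")
print(f"torchscript: {mean_runtime(traced_model):.4f} s/run")
```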


I Brake for Autonomous Vehicle Braking AGL (Above Ground Level)

#artificialintelligence

Trust me, I have no intention of trusting autonomous vehicle braking. One of the terms we see pop up in almost every technical vector is autonomous vehicles. As with 5G, the autonomous vehicle landscape is fraught with hype. That has even spilled over into the consumer marketing arena, with tons of ads for automobiles showing hands-off braking, lane navigation, self-parking, and more. Depending on whom you ask, autonomous vehicles are anywhere from SAE Level 3 to Level 5. Of course, the only one who believes we are at Level 5 is Elon Musk, with his claims for Teslas.


4 Ways Machine Learning Produces Actionable Threat Intelligence

#artificialintelligence

A big challenge in collecting and analyzing intelligence has always been scalability. Good, actionable intelligence takes expertise to develop. Let's say you're a government trying to gather information on a foreign power. You'll need experts who speak the language, know the culture well enough to blend in, have the right skill sets, and are sympathetic to your goals. Finding enough experts who meet those criteria will be difficult -- and even then, it still might not be enough to get regular, actionable intelligence.


Decision Tree Classifier from Scratch: Classifying Student's Knowledge Level

#artificialintelligence

In simple words, a Decision Tree Classifier is a supervised machine learning algorithm used for classification problems. Under the hood, each node of the tree asks a True or False question about one of the features and routes the sample left or right depending on the answer. You can learn more about Decision Trees here. We are going to use this algorithm to find patterns in historical data about students and classify their knowledge level, and to do so we will write our own simple Decision Tree Classifier from scratch in Python. Though I am going to explain everything along the way, it will not be a basic-level explanation.
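
Below is a minimal from-scratch sketch of that idea: each node picks the feature/threshold question that most reduces Gini impurity and routes samples left or right accordingly. The tiny dataset and hyperparameters (e.g. max_depth) are illustrative assumptions, not the article's actual code.

```python
import numpy as np

def gini(y):
    """Gini impurity of an integer label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Return the (feature, threshold) question that most reduces impurity."""
    best_f, best_t, best_score = None, None, gini(y)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            right = X[:, f] >= t          # the True/False question
            if right.all() or not right.any():
                continue                  # question doesn't split the data
            w = right.mean()
            score = w * gini(y[right]) + (1 - w) * gini(y[~right])
            if score < best_score:
                best_f, best_t, best_score = f, t, score
    return best_f, best_t

class DecisionTree:
    def __init__(self, X, y, depth=0, max_depth=5):
        self.prediction = np.bincount(y).argmax()        # majority class
        self.feature, self.threshold = best_split(X, y)
        if depth >= max_depth:
            self.feature = None                          # force a leaf
        if self.feature is None:
            return                                       # leaf node
        right = X[:, self.feature] >= self.threshold
        self.right = DecisionTree(X[right], y[right], depth + 1, max_depth)
        self.left = DecisionTree(X[~right], y[~right], depth + 1, max_depth)

    def predict_one(self, x):
        if self.feature is None:                         # reached a leaf
            return self.prediction
        branch = self.right if x[self.feature] >= self.threshold else self.left
        return branch.predict_one(x)

# Tiny illustrative dataset: two features, two knowledge-level classes
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])
tree = DecisionTree(X, y)
print(tree.predict_one(np.array([5.5, 8.5])))  # -> 1
```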


A Primer on Machine Learning and Deep Learning for Educators

#artificialintelligence

The field of learning has evolved drastically over the years. With the advent of e-learning and learning management systems, the process of learning has gone beyond the traditional model of classroom training. It is now possible for instructors and teachers to reach a wider, international audience through online courses hosted on cloud-based LMS platforms. Students can access these courses from anywhere in the world at any time, simply by logging into their accounts. Although e-learning is a complete and self-sustaining medium for imparting knowledge, it also works well in conjunction with traditional classroom training.


AI, cloud, blockchain and beyond: Changing the financial world individually and in tandem

#artificialintelligence

AI has been talked about since the very early days of computing and has attained mainstream use in recent years with the likes of Amazon's Alexa and Apple's Siri. "Just as in the last 40 years, computation has enabled us to change the way we do business and create new products, AI will help us to make better decisions," Carlos Kuchovsky, chief of technology and R&D at BBVA, tells Finextra. "We are now looking at the ways in which it can help us change the way we operate and bring value." The Bank of England recently reported that machine learning tools are in use at two-thirds of UK financial firms, with the average company using them in two business areas, a figure expected to double in the next three years. It may be through interoperation with cloud and blockchain technology that AI's capabilities will be fully harnessed.


AI to Lead to More Jobs in 2020 - Rick's Cloud

#artificialintelligence

Artificial Intelligence (AI) is here. And it is fast becoming an integral part of how businesses operate across the world. Many, however, are apprehensive of the change and fear that AI will replace jobs, although research suggests otherwise. John David Lovelock, Gartner VP for Research, shared with Information Week that starting in 2020, "automation and AI will cause the industry to add more jobs than it causes the industry to take away."