Subject-Specific Education


From machine learning to Python language skills: 6 tech skill sets that fetch maximum salary

#artificialintelligence

With both machine learning and data analytics in one's skill set, one can easily fetch an average pay of Rs 13.94 lakh per annum (LPA). Although knowledge of machine learning algorithms does add to the highest packages, that skill set alone can fetch a handsome Rs 10.43 LPA on average. If the latest Analytics India Industry Report 2017 – Salaries & Trends is anything to go by, one could make an average of Rs 10.40 LPA with exceptional R language skills. Professionals skilled in Python, one of the most popular programming languages, can make around Rs 10.12 LPA on average.


Investigating Bias In AI Language Learning

#artificialintelligence

We recommend addressing this through the explicit characterization of acceptable behavior. One such approach is seen in the nascent field of fairness in machine learning, which specifies and enforces mathematical formulations of nondiscrimination in decision-making. Another approach can be found in modular AI architectures, such as cognitive systems, in which implicit learning of statistical regularities can be compartmentalized and augmented with explicit instruction of rules of appropriate conduct. Certainly, caution must be used in incorporating modules constructed via unsupervised machine learning into decision-making systems.


The incredible ways people with disabilities customize their tech to thrive

Mashable

All it takes is a few taps of her tablet, and with a specialized app stringing letters into words, and words into phrases, her thoughts are played out loud. The music mode helps amplify low notes Rakowski can't hear otherwise, while the standard mode helps her to instruct her students. Rakowski, whose musical passion and profession rely on the ability to hear, started using hearing aids a little over a year ago. While Rakowski relies on visuals to control the volume of sounds coming through her hearing aids, Vasquez relies completely on his voice to navigate his Apple devices.


Technology and Legal Practice… How Disruptive Can It Possibly Be?

#artificialintelligence

This evening event at Westminster Law School, University of Westminster, brings together three prominent experts in the fields of artificial intelligence, robotics and law for a conversation around current developments in these areas, followed by an opportunity for the audience to engage and ask questions. Chrissie Lightfoot is a prominent international legal figure, an entrepreneur, a legal futurist, legaltech investor, writer, international keynote speaker, legal and business commentator (quoted periodically in The Times and FT), solicitor (non-practising), Honorary Visiting Fellow at the University of Westminster School of Law, and author of the best-sellers The Naked Lawyer and Tomorrow's Naked Lawyer. Chair: Dr Paresh Kathrani is a Senior Lecturer in Law at Westminster Law School and a member of the Centre on the Legal Profession. He has written on the challenges that AI will bring for the legal profession and chaired a panel on artificial intelligence at Westminster Law School in 2015, as well as an AI film and debate series in 2016 for the Centre for Law, Society and Popular Culture, of which he is also a member.


This shuttle bus will serve people with vision, hearing, and physical impairments--and drive itself

#artificialintelligence

Manser's employer, IBM, and an independent carmaker called Local Motors are developing a self-driving, electric shuttle bus that combines artificial intelligence, augmented reality, and smartphone apps to serve people with vision, hearing, physical, and cognitive disabilities. Future Ollis, for example, might direct visually impaired passengers to empty seats using machine vision to identify open spots, and audio cues and a mobile app to direct the passenger. For deaf people, the buses could employ machine vision and augmented reality to read and speak sign language via onboard screens or passengers' smartphones. Another potential Olli technology combines machine vision and sensors to detect when passengers leave items under their seats and issues alerts so the possessions can be retrieved, a feature meant to benefit people with age-related dementia and other cognitive disabilities.


Study: We're Teaching Artificial Intelligence to Be Just as Racist and Sexist as Humans

#artificialintelligence

According to a new Princeton study, though, the engineers responsible for teaching these AI programs things about humans are also teaching them how to be racist, sexist assholes. The study, published in today's edition of Science magazine by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, focuses on machine learning, the process by which AI programs begin to think by making associations based on patterns observed in mass quantities of data. In a completely neutral vacuum, this would mean that AI would learn to provide responses based solely on objective, data-driven facts. To demonstrate otherwise, Caliskan and her team created a modified version of the Implicit Association Test, an exercise that asks participants to quickly associate concrete ideas, like people of color and women, with abstract concepts, like goodness and evil.
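The embedding version of that test can be sketched in a few lines: measure how much closer a target word sits to one attribute set than another in vector space. Everything below is illustrative, not the study's actual code; the words and the tiny 3-d vectors are made up (the study used real pre-trained embeddings such as GloVe), but the association score has the same shape.

```python
import math

# Toy 3-d "embeddings" -- invented values for illustration only.
vecs = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(word, pleasant, unpleasant):
    # Mean similarity to the "pleasant" attribute set minus mean
    # similarity to the "unpleasant" set -- the embedding analogue
    # of an IAT score for one target word.
    pos = sum(cos(vecs[word], vecs[p]) for p in pleasant) / len(pleasant)
    neg = sum(cos(vecs[word], vecs[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

print(association("flower", ["pleasant"], ["unpleasant"]))  # positive
print(association("insect", ["pleasant"], ["unpleasant"]))  # negative
```

In real embeddings trained on web text, the same score computed over names or gendered words is where the study found human-like biases.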


Trump's Speeches Are Helping People Learn English. Really

WIRED

And across Facebook groups sharing posts focused on language learning and linguistics, early learners are turning to Trump-speak to learn basic vocabulary and concepts. The trend emerged back in February, when a post in the Polyglots group--a collection of Facebookers who love language and discuss linguistics--shared a piece from Good about Japanese translators struggling to parse Trump's speech. Nika Nemsitsveridze, a native Georgian speaker who's been learning English for about three years, is a member of Silly Linguistics, a Facebook group for language-related humor that often discusses Trump and the intricacies of his language. And though early learners can comprehend Trump's repetitive, simple language, advanced learners and native speakers are often confused by his rambling, tangential style, another reason linguists say Trump might not be the best learning tool.


This mind-reading system can correct a robot's error!

#artificialintelligence

A new brain-computer interface developed by scientists can read a person's thoughts in real time to identify when a robot makes a mistake, an advance that may lead to safer self-driving cars. By relying on brain signals called "error-related potentials" (ErrPs), which occur automatically when humans make a mistake or spot someone else making one, the new approach allows even complete novices to control a robot with their minds. This technology, developed by researchers at Boston University and the Massachusetts Institute of Technology (MIT), may offer intuitive and instantaneous ways of communicating with machines, for applications ranging from supervising factory robots to controlling robotic prostheses. "When humans and robots work together, you basically have to learn the language of the robot, learn a new way to communicate with it, adapt to its interface," said Joseph DelPreto, a PhD candidate at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).


Reimagining Language Learning with NLP and Reinforcement Learning

#artificialintelligence

With the unlimited supply of natural language data online, and with the advances in Natural Language Processing (NLP) techniques, shouldn't we be able to do something smarter? Actions could include vocabulary reviews, sentence comprehension tasks, or grammar exercises. In an MDP, the optimal policy is the one that maximizes the expected sum of rewards under some reward function, and by defining that function in the right way we can solve the problem of finding an optimal policy using Reinforcement Learning techniques. If we can accurately model the knowledge of a learner and the knowledge dependencies of text, then this task becomes trivial.
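The exercise-selection idea above can be sketched as a toy MDP solved with tabular Q-learning. Everything here is an assumption for illustration: the five coarse "learner levels", the three exercise types, and the transition dynamics (each level has one exercise type most likely to advance the learner) are all invented, not drawn from any real tutoring system.

```python
import random

random.seed(0)

# Toy MDP: states are learner levels 0..4, actions are exercise types,
# reward is +1 whenever the learner advances a level.
ACTIONS = ["vocab_review", "sentence_comprehension", "grammar_exercise"]
N_STATES = 5

def step(state, action):
    # Invented dynamics: each level has one exercise type that is
    # most likely to advance the learner (80% vs 20%).
    best = ACTIONS[state % len(ACTIONS)]
    p_up = 0.8 if action == best else 0.2
    if state < N_STATES - 1 and random.random() < p_up:
        return state + 1, 1.0
    return state, 0.0

# Tabular Q-learning over the toy MDP.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s < N_STATES - 1:
        if random.random() < eps:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        s2, r = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy: which exercise to assign at each level.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)
```

The learned policy recovers the "best" exercise per level that the invented dynamics reward; a real system would replace `step` with a model of the learner's knowledge and the text's knowledge dependencies.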