

Silencing Malware with AI

#artificialintelligence

Stuart McClure is on a personal mission. After more than two decades in the anti-malware industry, he firmly believes that ninety percent of malware attacks today can be prevented by not clicking on this, not clicking on that, and not opening that attachment either. While he is neither the first nor the only one to suggest that users bear at least some responsibility, the anti-malware industry has yet to produce an effective alternative to signature-based solutions built around known attacks. McClure's company, Cylance, thinks it has the answer with its first-generation AI-driven anti-malware products for both enterprises and consumers. "Why couldn't we simply train a computer to think like a cybersecurity professional to know what to do and not to do based on the characteristics and features of known attacks?" asked McClure.
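
The approach McClure describes, training a model on the characteristics and features of known attacks rather than matching signatures, is at its core supervised classification over file features. The article gives no implementation details, so the sketch below is purely illustrative: the feature names, the toy data, and the scikit-learn RandomForestClassifier are assumptions for demonstration, not Cylance's actual pipeline.

```python
# Illustrative only: supervised classification over static file features,
# the general idea behind feature-based (rather than signature-based)
# malware detection. Feature choices and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy feature vectors per executable: [file_size_kb, entropy, num_imports,
# num_sections, has_packer_flag]; label 1 = known malware, 0 = benign.
X = np.array([
    [512,  7.8,  3, 2, 1],
    [2048, 6.9,  5, 3, 1],
    [150,  4.2, 42, 5, 0],
    [980,  5.1, 60, 6, 0],
    [300,  7.5,  2, 2, 1],
    [1200, 4.8, 55, 7, 0],
])
y = np.array([1, 1, 0, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen file by its features instead of looking up a signature.
new_file = np.array([[640, 7.6, 4, 2, 1]])
print(clf.predict_proba(new_file))  # probabilities for [benign, malware]
```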


How Edtech Startups Are Changing The Face Of Education In India

#artificialintelligence

The landscape of formal education in India is based on a relatively archaic model, and not much has evolved over the last 150 years. Students still attend brick-and-mortar schools to be educated, and the system remains largely exam-driven, theoretical and impractical. The emphasis is on scoring rather than on learning and the subsequent application of that knowledge.


Mum and Dad are our biggest security risk! - IoT global network

#artificialintelligence

Your mother's maiden name, the name of your first pet, the city you were born in: what do these all have in common? Not only are they popular security questions for online authentication, but given our culture of oversharing on social media, they are no longer the most "secure" security questions. With sites like Facebook growing in popularity among the over-55s, are our loving parents actually the weakest line of defence in protecting our digital identities? If they insist on posting personal information on our behalf, asks Callsign CMO and go-to-market strategy head Sarah Whipp, have we exposed ourselves more than we realise?


AI proves 'too good' at writing fake news, held back by researchers

#artificialintelligence

The organization created a machine learning algorithm, GPT-2, that can produce natural-looking language largely indistinguishable from that of a human writer while remaining largely "unsupervised": it needs only a small text prompt to provide the subject and context for the task. The team have made some strides toward this lofty goal, but have also somewhat inadvertently admitted that, once perfected, the system could mass-produce fake news on an unprecedented scale. "We have observed various failure modes," the team noted, "such as repetitive text, world modelling failures (e.g. the model sometimes writes about fires happening under water), and unnatural topic switching."
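
The prompt-to-continuation workflow the article describes can be illustrated with the small GPT-2 checkpoint that OpenAI did release publicly. The article itself mentions no code or API, so the use of Hugging Face's transformers library and the "gpt2" model name below are assumptions for illustration only; the withheld larger model is not what runs here.

```python
# Minimal sketch of prompt-conditioned text generation with a publicly
# released GPT-2 checkpoint via the Hugging Face transformers library.
# Illustrative only: not the withheld full-size model discussed in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)

# The model continues the prompt, inventing the rest of the "article".
print(outputs[0]["generated_text"])
```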


Never get catfished again: Researchers develop AI that detects fake profiles on popular dating apps

Daily Mail

Scientists have developed an algorithm that can spot dating scams. A team of researchers trained AI software to 'think like humans' when looking for fake dating profiles. While the algorithm has only been deployed in a research setting, it could one day be used to protect users on popular dating services like Tinder and Match.com. Romance scams, where criminals create phony profiles to trick love-lusting victims into sending them money, are on the rise.


Hacks, Nudes, and Breaches: It's Been a Rough Month for Dating Apps

WIRED

Dating is hard enough without the added stress of worrying about your digital safety online. But social media and dating apps are pretty inevitably involved in romance these days--which makes it a shame that so many of them have had security lapses in such a short amount of time. Within days of each other this week, the dating apps OkCupid, Coffee Meets Bagel, and Jack'd all disclosed an array of security incidents that serve as a grave reminder of the stakes on digital profiles that both store your personal information and introduce you to total strangers. "Dating sites are designed by default to share a ton of information about you; however, there's a limit to what should be shared," says David Kennedy, CEO of the threat tracking firm Binary Defense Systems. "And often times these dating sites provide little to no security, as we have seen with breaches going back several years from these sites."



Too scary? Elon Musk's OpenAI company won't release tech that can generate fake news

USATODAY

The spread of fake news is already a very real problem. Artificial intelligence could make the problem even worse. That prospect is so frightening that an Elon Musk-backed non-profit called OpenAI has decided not to publicly circulate AI-based text generation technology that enables researchers to spin an all-too-convincing--and yes, fabricated--machine-written article. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," OpenAI blogged. Such concerns go beyond just generating misleading news articles.


Controlling False Discoveries in Large-Scale Experimentation: Challenges and Solutions

#artificialintelligence

"Scientific research has changed the world. Now it needs to change itself." There has been a growing concern about the validity of scientific findings. A multitude of journals, papers and reports have recognized the ever smaller number of replicable scientific studies. In 2016, one of the giants of scientific publishing, Nature, surveyed about 1,500 researchers across many different disciplines, asking for their stand on the status of reproducibility in their area of research.


Personal Data Collection: The Complete Wired Guide

WIRED

On the internet, the personal data users give away for free is transformed into a precious commodity. The puppy photos people upload train machines to be smarter. The questions they ask Google uncover humanity's deepest prejudices. And their location histories tell investors which stores attract the most shoppers. Even seemingly benign activities, like staying in and watching a movie, generate mountains of information, treasure to be scooped up later by businesses of all kinds. Personal data is often compared to oil--it powers today's most profitable corporations, just like fossil fuels energized those of the past. But the consumers it's extracted from often know little about how much of their information is collected, who gets to look at it, and what it's worth. Every day, hundreds of companies you may not even know exist gather facts about you, some more intimate than others.