NEW DELHI: Indian companies are shelling out huge premiums for artificial intelligence (AI) talent, as competition intensifies in the job market for a skillset that is hard to find. Everyone from consumer Internet players and technology companies to financial services and automakers is betting big on AI, but the local talent pool for them to tap into is extremely limited. The demand-supply mismatch is driving up salaries. AI professionals are getting 60-80% hikes while switching jobs, compared with an average of 20-30% in other skill areas. Even an entry-level AI role can command a 70%-plus premium over that of a plain vanilla computer science (CS) engineer, say recruitment firms and industry experts.
Researchers at the non-profit AI research group OpenAI just wanted to train their new text generation software to predict the next word in a sentence. It exceeded all of their expectations and was so good at mimicking human writing that they have decided to pump the brakes on the research while they explore the damage it could do. Elon Musk has been clear that he believes artificial intelligence is the "biggest existential threat" to humanity. Musk is one of the primary funders of OpenAI, and though he has taken a backseat role at the organization, its researchers appear to share his concerns about opening a Pandora's box of trouble. This week, OpenAI shared a paper covering its latest work on text generation technology, but it is deviating from its standard practice of releasing the full research to the public out of fear that the technology could be abused by bad actors.
The spread of fake news is already a very real problem. Artificial intelligence could make the problem even worse. That prospect is so frightening that an Elon Musk-backed non-profit called OpenAI has decided not to publicly circulate AI-based text generation technology that enables researchers to spin an all-too-convincing, and yes fabricated, machine-written article. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," OpenAI blogged. Such concerns go beyond just generating misleading news articles.
Amazon said on Friday it would lead a $700 million investment in U.S. electric pickup truck startup Rivian, in the e-commerce giant's biggest bet on technologies with potential to reshape the automotive sector. The deal represents a major endorsement of Rivian's electric vehicle technology by the world's largest online retailer, which is looking for ways to boost the speed and reduce the cost of its deliveries. Reuters reported on Tuesday that Amazon and General Motors were in talks to invest in Rivian. GM's talks with Rivian about an investment are continuing and any deal would be announced at a later date, people familiar with the talks said on Friday.
A popular dating app has become the latest victim of a major data breach after hackers exposed the details of 6 million of its users. Hacked information of Coffee Meets Bagel users appeared in a huge cache of data that appeared on a popular dark web marketplace earlier this week. The previously undisclosed breach has since been acknowledged by the dating app. Coffee Meets Bagel revealed details about the hack in an email to its users on Valentine's Day, explaining that members' names and email addresses had been exposed. "We recently discovered that some data from your Coffee Meets Bagel account may have been acquired by an unauthorised party," the email stated.
An artificial intelligence project backed by SpaceX founder Elon Musk has been so successful that its developers are not releasing it to the public for fear it will be misused. Research group OpenAI developed a 'large-scale unsupervised language model' that is able to generate news stories from a simple headline. But the group insists it will not be releasing details of the programme and instead has unveiled a much smaller version for research purposes. Its developers claim the technology is poised to rapidly advance in the coming years, and the full specification and details of the project will only be released once the negative applications have been discussed by researchers. OpenAI announced in a paper yesterday that it has generated 'a large-scale unsupervised language model' that can write news stories from little more than a headline.
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse. OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, is so good, and the risk of malicious use so high, that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough. At its core, GPT-2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output and the wide variety of potential uses.
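The prediction mechanism described above can be illustrated in miniature. The sketch below is not OpenAI's model; it is a toy bigram language model, a hypothetical example assuming only that text generation means repeatedly choosing a likely next word given the words so far, which is the same basic principle GPT-2 scales up with a neural network and vastly more data.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, prompt, n_words=5):
    """Greedily extend the prompt with the most frequent next word."""
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Tiny illustrative corpus (made up for this sketch)
corpus = "the model writes text and the model predicts the next word"
follows = train_bigrams(corpus)
print(generate(follows, "the", 3))
```

A real language model replaces the bigram counts with a learned probability distribution over a large vocabulary, conditioned on much longer context, which is what makes the output coherent over whole paragraphs rather than a few words.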
Apple's long-rumoured, much-hyped new AirPods are on their way. And they could be as big a change as when Apple first dropped the wires from its existing headphones to create them. Introduced in 2016, the two truly wireless earphones haven't been updated since. They came with a wide range of futuristic features, but a range of rumours has swirled around them about updates that would bring new, even more futuristic features. The AirPods are actually two (or three) different things: the case that protects them and the earphones that sit inside.
There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies - a treaty which is already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.