SAN FRANCISCO -- Gig economy workers are increasingly ubiquitous, shuttling us to appointments and delivering our food while working for Uber, Lyft, DoorDash and others. Thanks in large part to the app-based tech boom emanating from this city, 36% of U.S. workers participate in the gig economy, according to Gallup. But not all gigs are created equal, Gallup adds, noting that so-called "contingent gig workers" experience their workplace "like regular employees do, just without the benefits of a traditional job -- benefits, pay and security." California lawmakers are weighing what is considered a pro-worker bill that, if passed into law, would set a national precedent that fundamentally redefines the relationship between worker and boss by forcing corporations to pay up.
We hear a lot about AI and its transformative potential. What that means for the future of humanity, however, is not altogether clear. Some futurists believe life will be improved, while others think it is under serious threat. Here's a range of takes from 11 experts.
Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cybersecurity companies, and everything in between use it. But a new report published by the SHERPA consortium – an EU project studying the impact of AI on ethics and human rights – finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes rather than creating new attacks that would use machine learning. The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.
Elon Musk's secretive "brain-machine interface" startup, Neuralink, stepped out of the shadows on Tuesday evening, revealing its progress in creating a wireless implantable device that can – theoretically – read your mind. At an event at the California Academy of Sciences in San Francisco, Musk touted the startup's achievements since he founded it in 2016 with the goal of staving off what he considers to be an "existential threat": artificial intelligence (AI) surpassing human intelligence. Three years later, Neuralink claims to have achieved major advances toward Musk's goal of having human and machine intelligence work in "symbiosis". Neuralink says it has designed very small "threads" – smaller than a human hair – that can be injected into the brain to detect the activity of neurons. It also says it has developed a robot to insert those threads in the brain, under the direction of a neurosurgeon.
Summary: Despite our concerns about China taking the lead in AI, our own government's efforts, mostly through DARPA, continue to provide strong leadership and funding to maintain our lead. Here's their plan to maintain that lead over the next decade. Think all those great ideas that have powered AI/ML for the last 10 years came from Silicon Valley and a few universities? Hard as it may be to admit, it's the billions in seed money that our government has spent that got pretty much all of these breakthroughs to the doorway of commercial acceptability. Dozens of articles bemoan the huge investments that China is making in AI, with the threat that it will pull ahead.
According to the new market research report "Artificial Intelligence Market by Offering (Hardware, Software, Services), Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision), End-User Industry, and Geography - Global Forecast to 2025", published by MarketsandMarkets, the Artificial Intelligence Market is expected to be valued at USD 21.5 billion in 2018 and is likely to reach USD 190.6 billion by 2025, at a CAGR of 36.6% during the forecast period. Major drivers for the market are growing big data, the increasing adoption of cloud-based applications and services, and an increase in demand for intelligent virtual assistants. The major restraint for the market is the limited number of AI technology experts. Critical challenges facing the AI market include concerns regarding data privacy and the unreliability of AI algorithms. Underlying opportunities in the artificial intelligence market include improving operational efficiency in the manufacturing industry and the adoption of AI to improve customer service.
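The report's headline figures are internally consistent, which is worth checking: a market growing from USD 21.5 billion in 2018 to USD 190.6 billion in 2025 implies the quoted compound annual growth rate. A minimal sketch of that arithmetic (the variable names are illustrative, not from the report):

```python
# Sanity-check the implied compound annual growth rate (CAGR)
# from the MarketsandMarkets figures quoted above.
start_value = 21.5     # USD billion, 2018
end_value = 190.6      # USD billion, 2025
years = 2025 - 2018    # 7-year forecast period

# CAGR = (end / start)^(1/years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

The result is roughly 36.6% per year, matching the rate stated in the report.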
Companies and public sector organisations say they have no choice but to automate their cyber defences as hacking becomes increasingly sophisticated. Security professionals can no longer keep pace with the volume and sophistication of attacks on computer systems. In a study of 850 security professionals across 10 countries, more than half said their organisations are overwhelmed with data. So they are turning to machine-learning technologies that can identify cyber attacks by analysing huge quantities of network data and have the potential to block attacks automatically. By 2020, two out of three companies plan to deploy cyber security defences incorporating machine learning and other forms of artificial intelligence (AI), according to the Capgemini study, Reinventing cyber security with artificial intelligence.
Elon Musk, the futurist billionaire behind SpaceX and Tesla, outlined his plans to connect humans' brains directly to computers on Tuesday night, describing a campaign to create "symbiosis with artificial intelligence." He said the first prototype could be implanted in a person by the end of next year. Arriving at that goal "will take a long time," Musk said in a presentation at the California Academy of Sciences in San Francisco, noting that securing federal approval for implanted neural devices is difficult. But testing on animals is already underway, and "a monkey has been able to control the computer with his brain," he said. Musk founded Neuralink Corp. in July 2016 to create "ultra-high bandwidth brain-machine interfaces to connect humans and computers."
When Mark Zuckerberg told Congress Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn't particularly specific about what that meant. Given my own work using image and video analytics, I suggest the company should be careful. Despite some basic potential flaws, AI can be a useful tool for spotting online propaganda – but it can also be startlingly good at creating misleading material. Researchers already know that online fake news spreads much more quickly and more widely than real news. My research has similarly found that online posts with fake medical information get more views, comments and likes than those with accurate medical content.
Inherently biased artificial intelligence programs can pose serious problems for cybersecurity at a time when hackers are becoming more sophisticated in their attacks, experts told CNBC. Bias can occur in three areas -- the program, the data and the people who design those AI systems, according to Aarti Borkar, a vice president at IBM Security. "One is the algorithm itself," she told CNBC, referring to the lines of code that teach an AI program to carry out specific tasks. "Is it biased in the way it's approached, and the outcome it's trying to solve?" A biased program may end up focusing on the wrong priorities and could miss the real threats, she explained.