Workers at Amazon have demanded that their employer stop the sale of facial recognition software and other services to the US government. In a letter addressed to Amazon CEO Jeff Bezos and posted on the company's internal wiki, employees said that they "refuse to contribute to tools that violate human rights," citing the mistreatment of refugees and immigrants by ICE and the targeting of black activists by law enforcement. The letter follows similar protests at Google and Microsoft. "As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used," says the letter, first reported by The Hill. The employees (it's not clear how many signed the letter) refer to the sale of computer services by IBM to the Nazis as a worrying parallel.
If you're like most litigators, you know that which judge is hearing your case can make a difference in your litigation strategy. It used to be that if it was a judge you or other attorneys in your firm had appeared before, you could rely on that cumulative experience and build an argument that appealed to the judge's tendencies. But if it was a judge you were unfamiliar with, you were largely in the dark. AI has changed that. By accessing dockets on legal research sites, you can find just about every filing, motion, and ruling on record from a judge.
Smart wristbands, wireless sensing systems, and ultra-efficient solar cells – a glance through the list of winning projects from this year's ExploraVision competition might read a lot like the roll call at the Consumer Electronics Show. But the concepts that claimed the top prizes aren't coming from tech's biggest names, or even the latest startups to break out of Silicon Valley. They're all, essentially, created by kids. "Quite often young kids have these ideas for medical advancements, and it's out of empathy or feelings for someone they know – a friend of theirs, a family member who has some medical condition – that they develop a device or system that would address it," Nye said. The Toshiba-backed initiative announced the winners of its annual K-12 science competition earlier this month, revealing the groundbreaking prototypes that seek to bring answers to our everyday problems. With solutions for everything from current medical failings to electrical grid woes, it's no wonder the student projects have already begun to capture industry attention.
A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They're betting on whether someone will create a so-called deepfake video about a political candidate that receives more than 2 million views before getting debunked by the end of 2018. The actual stakes are fairly small: Manhattan cocktails as a reward for the "yes" camp and tropical tiki drinks for the "no" camp. But the implications of the technology behind the bet's premise could reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened.
In Part 2 of this article on the legal landscape of smart manufacturing, we look at legal issues surrounding data ownership and data privacy, as well as the implications of artificial intelligence, which is rapidly making inroads into the manufacturing arena. The inspiration for the two blog posts (Part 1 and this Part 2) on a much less discussed aspect of Industry 4.0 comes from last month's Smart Manufacturing, 3D Printing & Industry 4.0 Forum event in Singapore, where ARC Advisory Group chaired a panel, Future of Manufacturing: The Emerging Legal Challenges. The panel comprised lawyers Matt Pollins, partner and head of commercial/TMT at legal firm CMS Singapore, and Wong Hong Boon, manufacturing and supply chain legal counsel at 3M Singapore, along with Ani Bhalekar, head of IoT/Industry X.0 & Mobile Practices for ASEAN at Accenture, and CK Vishwakarma. What follows are the questions posed by ARC to the panel in the second half of the session and a summary of the subsequent discussions. The straight legal answer is that there are question marks over whether you can really own data as intellectual property. Because data is not easy to own, you need to nail down in the contract between the customer and the vendors who can control it.
While picture editors have tweaked images for decades, modern tools like Adobe Photoshop let them alter photos to the point of complete fabrication. Think of sharks swimming in the streets of New Jersey after Hurricane Sandy, or someone flying a "where's my damn dinner?" banner over a women's march. Those images were fake, but clever manipulation can trick news outlets and social media users into thinking they're real. Bombastic pictures can go viral before anyone figures out they're phony, and by then it's nearly impossible to let everyone know the image they shared is a sham. Adobe, certainly aware of how complicit its software is in the creation of fake news images, is working on artificial intelligence that can spot the markers of phony photos.
Amazon workers have written to CEO Jeff Bezos in protest of the company selling facial recognition tools and other technology to police departments and government agencies. The workers cite the use of Amazon technology by the US Department of Homeland Security and the Immigration and Customs Enforcement (ICE) agency, which have been criticised for enforcing President Donald Trump's "zero tolerance" policy that has seen parents separated from their children at the US border. "As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used. We learn from history, and we understand how IBM's systems were employed in the 1940s to help Hitler," the letter states. "IBM did not take responsibility then, and by the time their role was understood, it was too late."
Many insurers are investing in AI beyond machine learning, which is one of its subfields. Opportunities range from an enhanced customer experience (reduced cycle times, personalized advisors through chatbots, fast-track claims management) to productivity gains, pricing sophistication, churn-risk anticipation and more accurate fraud-detection patterns. Insurers can build internal capabilities, partner with start-ups in these fields, or do both to accelerate time to market. AI is a great enabler. Nevertheless, striking the right balance between human contact and AI is key.
Amazon's operation has grown well beyond merely delivering items to people's homes. Jeff Bezos's massive corporation is now involved in everything from grocery shopping to fashion, but the recent revelation that Amazon technology assists law enforcement is a bridge too far for some employees. A group of Amazon employees (referred to as Amazonians) penned a letter to Bezos on Thursday asking the billionaire CEO to halt the sale of facial recognition technology to law enforcement agencies, The Hill reported. The software, called Amazon Web Services Rekognition, has been linked to government agencies like the controversial Immigration and Customs Enforcement, or ICE. The letter cited the United States government's history of injustice towards minorities in calling for Amazon to stop assisting ICE.
The rise of big data and the subsequent centrality of analytics have started to have an impact on nearly every industry and occupation around the globe – a trend that is only set to continue. With developing technology and more data to fuel analytics engines, we are likely to find smart data tools everywhere – even in the courtroom. It may seem a little alien at first, but analytics and AI are already helping lawyers do their jobs more effectively. Here is a look at early indications of just how useful advanced data technology might be in the courtroom. Unlike the glamorous TV shows of unforgettable courtroom battles, the reality of the law is that lawyers spend most of their time reading.