The controversial facial recognition company Clearview AI says it will stop providing private entities with its technology. According to legal documents first reported by BuzzFeed News, the company is ending non-government contracts in response to class-action lawsuits and scrutiny from regulators. The court documents suggest that Clearview is voluntarily avoiding "transacting with non-governmental customers anywhere." "Clearview is cancelling the accounts of every customer who was not either associated with law enforcement or some other federal, state, or local government department, office, or agency," the company said in a filing. BuzzFeed News reports that the lawsuit from which the documents stem relates to the company's use of biometric data and is being heard in an Illinois federal court. The documents also show that Clearview will cease its contracts with all entities in Illinois as part of the lawsuit.
Startup Clearview AI has built a facial recognition system that it claims can identify people in real time, matching them against billions of images pulled from databases and scraped from social media. Earlier this year, a list containing the names of private companies using or possibly interested in using the technology leaked out as regulators began to scrutinize the outfit and people filed lawsuits. According to BuzzFeed News, Clearview AI said in a filing that "Clearview is cancelling the accounts of every customer who was not either associated with law enforcement or some other federal, state, or local government department, office, or agency," and is cancelling the accounts of all entities in Illinois. The company is being sued for allegedly breaking a state law concerning the use of biometric information by scraping images from the plaintiff's social media accounts to train its algorithm. The leak listed companies like Best Buy and Macy's as clients, showing how far-reaching the surveillance tech could become.
"Our greatest assets are our team members, and we are committed to continually improving their lives. Whether investing in leadership initiatives or improving our facilities, we believe the only way you can create a world-class customer experience is by first creating a world-class team member experience." Preface: To tee up the new items produced by Clayton Homes that follow below, some related background is useful. An independent retailer that stopped selling Clayton Homes' HUD Code manufactured homes some time ago reminded MHProNews about claims that, after Warren Buffett bought the brand, the company tried cutting the pay of retail general managers.
Video conference app Zoom illegally shared personal data with Facebook, even if users did not have a Facebook account, a lawsuit claims. The app has experienced a surge in popularity as millions of people around the world are forced to work from home as part of coronavirus containment measures. The lawsuit, which was filed in a California federal court on Monday, states that the company failed to inform users that their data was being sent to Facebook "and possibly other third parties". It states: "Had Zoom informed its users that it would use inadequate security measures and permit unauthorised third-party tracking of their personal information, users... would not have been willing to use the Zoom App." The allegations come amid a flurry of questions surrounding Zoom's privacy policies, with the Electronic Frontier Foundation recently warning that the app allows administrators to track the activities of attendees.
Companies using artificial intelligence (AI) across their business units should consider creating a C-suite position to oversee how AI is used and guard against the risk of making bad decisions based on biased algorithms, experts say. Only a few companies, like Levi Strauss & Co, have established a chief artificial intelligence officer (CAIO) position, and fewer have created a C-level position dedicated solely to AI ethics. Brian Kropp, chief of research in the HR practice at Gartner, said chief technology officers and chief information officers will struggle with handling AI-related decisions and ethical dilemmas. "CTOs and CIOs are going to be thinking about the role through the lens of how they can make the technology work," Kropp said. However, "artificial intelligence is not a question of how you get the technology to work; it's a question of how do you think through the implications of the technology?"
With the news that Delta Air Lines is suing its chatbot provider, amid endless IT breaches and disasters, the reality is now starkly clear: chatbots need to be secure and well-managed to protect the business and its customers. The cloud is easy and seductive: sign up for a service, create something amazing, and off you go. That flexibility and access has been a huge boon, driving startups and helping departments get ahead of their plodding IT departments. However, in the rush to adopt cool AI and chatbot products, or to use the cloud for storage and third-party solutions, the need for cast-iron security becomes all the greater, and most businesses lack the expertise to manage that facet. This issue was brought to light by US airline Delta filing suit against [24]7.ai, claiming the vendor lacked proper security procedures for its product, allowing hackers to alter the chatbot's source code.
A quick search of recent headlines and blog posts suggests there is anxiety surrounding artificial intelligence (AI). One article shouts, "Robots will soon do your taxes!" Another reads, "Lawyers could be the next profession to be replaced by computers." Those of us involved in technology marketing strategy and communications are struggling to understand what the true impact of AI will be on our respective companies and clients, and on the technology-based products and services they provide. New AI applications in legal research, contracts management, or e-discovery may fundamentally change the value proposition. For those AI solutions, marketers and communications teams must strive to effectively educate prospects and customers on the nature of artificial intelligence, separating rumor from fact.
There is no denying that in a court of law, any mistake in communication can turn a case on its head. When there are inaccuracies in the recording of statements and audio files, a case can easily become more confusing than it should be. Worse, a case could end up punishing the wrong people. In an industry that often deals with life and death, it is no wonder that legal transcriptionists are highly valued. Advances are even being made in which AI is used to take over courtroom transcription.
Companies looking for ways to cut costs as they brace for a coronavirus-induced economic slowdown should consider their patent portfolio. It's like a cupboard in desperate need of a spring clean. Businesses spend over $40bn on maintaining their patent portfolio each year, according to a new study from the UK intellectual property (IP) startup Aistemos and media platform IAM, but less than 20% of companies believe they have the right portfolio. Large companies hold tens of thousands of patents aimed at protecting the company's business from copying and legal issues. Around 4.5% of a company's revenues, on average, are vulnerable to patent litigation, according to the US consultancy Analysis Group.
If you're not concerned about the potential legal liability from using AI, then you're not paying attention. That's the message from Andrew Burt, one of the founders of BNH.ai, a boutique law firm dedicated to advising clients on the legal pitfalls of embracing AI. BNH.ai has been up and running for just over a month, but today marks the official launch of the Washington, D.C. law firm, which was co-founded by Burt, the chief legal counsel for Immuta, and Patrick Hall, the head of product at H2O.ai. Both will continue in their existing roles at Immuta and H2O.ai while growing their new law firm.