regulation


Gatefy: anti-spam and anti-phishing solution for your business

#artificialintelligence

If your company is looking for an anti-spam and anti-phishing solution, Gatefy will solve your problem. Gatefy Email Security (GES) is a solution that protects your company against different types of email threats, such as spam, phishing, ransomware, viruses, BEC (Business Email Compromise), and social engineering. GES is compatible with several email providers, such as Office 365, G Suite, Exchange, and Zimbra. In practice, it adds an advanced layer of protection to your line of defense, offering great value for money. As we're talking about a security and data protection tool, Gatefy's anti-spam and anti-phishing solution also helps your company comply with laws and regulations such as the LGPD in Brazil, the GDPR in Europe, and the CCPA in California. Email is the primary vector used by hackers to compromise companies.


Our goal shouldn't be to build merely 'trustworthy' AI

#artificialintelligence

Did you know Mariarosaria Taddeo, the Deputy Director of the Oxford Internet Institute's Digital Ethics Lab, is speaking at TNW2020 this year? Check out her session on 'Shaping the future of AI: International policy outlook' here. Artificial intelligence is increasingly affecting our everyday lives. The field has the potential to make the world a healthier, wealthier, and more efficient place. But it also poses vast safety and security risks.


The technology that powers the 2020 campaigns, explained

MIT Technology Review

Campaigns and elections have always been about data--underneath the empathetic promises to fix your problems and fight for your family, it's a business of metrics. If a campaign is lucky, it will find its way through a wilderness of polling, voter attributes, demographics, turnout, impressions, gerrymandering, and ad buys to connect with voters in a way that moves or even inspires them. Obama, MAGA, AOC--all have had some of that special sauce. Still, campaigns that collect and use the numbers best win. That's been true for some time, of course.


Why AI is essential for controllers

#artificialintelligence

Artificial Intelligence is a hot topic. Applications based on machine learning make the news on a near-daily basis. Smart police cameras steered by algorithms can register drivers holding cell phones with great precision. Algorithms can dynamically determine the real-time prices for taxi rides, hotel rooms, airplane seats, and so on. High-frequency traders are getting rich as they sleep by letting their secret algorithms do the work.


AI, Protests, and Justice

#artificialintelligence

Editor's Note: The use of face recognition technology in policing has been a long-standing subject of concern, even more so now after the murder of George Floyd and the demonstrations that have followed. In this article, Mike Loukides, VP of Content Strategy at O'Reilly Media, reviews how companies and cities have addressed these concerns, as well as ways in which individuals can mitigate face recognition technology or even use it to increase accountability. We'd love to hear what you think about this piece. Largely on the impetus of the Black Lives Matter movement, the public's response to the murder of George Floyd, and the subsequent demonstrations, we've seen increased concern about the use of facial identification in policing. First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon said they will not sell face recognition technology to police forces.


To simplify AI regulation, use the GDPR's high-risk criteria

#artificialintelligence

First, the two cumulative criteria proposed by the Commission will inevitably be incomplete, leaving some applications out. That's the tradeoff for simple rules – they miss the mark in a small but significant number of cases. To work properly, simple rules must be supplemented by a general catch-all category for other high-risk applications that would not qualify under the two-criteria test. If you add a catch-all test (which would be necessary in our view), the goal of legal certainty would be largely defeated. Second, the "high risk" criterion will interfere with other legal concepts and thresholds that already apply to AI applications.


GSA adds machine learning support for agency regulatory reviews - FedScoop

#artificialintelligence

The General Services Administration is modernizing how agencies review regulations using machine learning (ML), in a procurement through its Centers of Excellence (CoE) initiative. GSA awarded a $9.9 million contract to Deloitte and Esper, Inc. for ML support for agencies. ML can review rules and regulations to identify trends in the data, which can help eliminate redundancies and streamline the process of writing new ones. Both the CoEs within GSA's Technology Transformation Services and the Federal Systems Integration and Management Center (FEDSIM) have used ML, a subset of artificial intelligence, to conduct regulatory reviews. The contract extends their work to CoE partner agencies.
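
The article does not describe the models Deloitte or Esper use; purely as a hypothetical illustration of how text similarity can surface redundant passages across regulations, here is a minimal sketch using TF-IDF and cosine similarity (scikit-learn assumed, regulation snippets invented):

# Minimal sketch: flagging potentially redundant regulatory passages by
# text similarity. This is an illustration of the general idea only,
# not the method used in the GSA contract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets standing in for regulation sections.
sections = [
    "Contractors must submit safety reports within 30 days of an incident.",
    "A safety report shall be filed by the contractor no later than 30 days after any incident.",
    "Agencies shall publish procurement notices in the Federal Register.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(sections)
similarity = cosine_similarity(vectors)

# Flag pairs of sections whose similarity exceeds a (hypothetical) threshold.
THRESHOLD = 0.3
for i in range(len(sections)):
    for j in range(i + 1, len(sections)):
        if similarity[i, j] > THRESHOLD:
            print(f"Sections {i} and {j} look redundant (score={similarity[i, j]:.2f})")

In practice a reviewer would still read the flagged pairs; the model only narrows down where to look.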


Is AI an Existential Threat?

#artificialintelligence

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI. To understand ANI, you simply need to recognize that every AI application currently available is a form of ANI. These are systems with a narrow field of specialty; for example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess, and even if that program continuously improves itself through reinforcement learning, it will never be able to operate an autonomous vehicle.
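
As a toy illustration of how narrow such a learner is (not the article's chess example, just a hypothetical stand-in), here is a minimal tabular Q-learning sketch: the agent masters one tiny task, and the table it learns means nothing anywhere else.

import random

# Toy task: walk right along five states; reward only for reaching the last one.
N_STATES = 5
ACTIONS = [-1, +1]                     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])   # exploit current table
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned policy ("always step right") is optimal for this task
# and useless for any other task.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})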


Artificial Intelligence and Consumer Protection

#artificialintelligence

AI-based applications raise new, so far unresolved legal questions, and consumer law is no exception. The use of self-learning algorithms in Big Data analysis gives companies the opportunity to gain a detailed, individual insight into the customer's personal circumstances, behavior patterns and personality. On this basis, companies can tailor not only their advertising but also their prices and contract terms to the respective customer profile and – drawing on the findings of behavioral economics – exploit the consumer's biases and/or her willingness to pay. AI-based insights can also be used in scoring systems that decide whether a specific consumer can purchase a product or take up a service. The use of AI in consumer markets thus leads to a new form of power and information asymmetry.
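
The article stays at the legal level; purely as a hypothetical illustration of the kind of scoring system it describes, the following sketch trains a classifier on invented behavioral features to decide whether a consumer is offered a service (scikit-learn assumed, all data made up):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features: [income in thousands, past purchases, complaints filed]
X = np.array([
    [52, 14, 0],
    [23,  2, 3],
    [71, 30, 1],
    [18,  1, 4],
])
y = np.array([1, 0, 1, 0])  # 1 = service was offered, 0 = declined

model = LogisticRegression().fit(X, y)

# The consumer never sees this model or the features it weighs -- the
# information asymmetry the article describes.
new_customer = np.array([[31, 5, 2]])
print("offer service:", bool(model.predict(new_customer)[0]),
      "score:", round(float(model.predict_proba(new_customer)[0, 1]), 2))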


AI Summit 2020: Regulating AI for the common good

#artificialintelligence

Artificial intelligence requires carefully considered regulation to ensure technologies balance cooperation and competition for the greater good, according to expert speakers at the AI Summit 2020. As a general purpose technology, artificial intelligence (AI) can be used in a staggering array of contexts, with many advocates framing its rapid development as a cooperative endeavour for the benefit of all humanity. The United Nations, for example, launched its AI for Good initiative in 2017, while the French and Chinese governments talk of "AI for Humanity" and "AI for the benefit of mankind" respectively – rhetoric echoed by many other governments and supra-national bodies across the world. On the other hand, these same advocates also use language and rhetoric that emphasises the competitive advantages AI could bring in the more narrow pursuit of national interest. "Just as in international politics, there's a tension between an agreed aspiration to build AI for humanity, and for the common good, and the more selfish and narrow drive to compete to have advantage," said Allan Dafoe, director of the Centre for the Governance of AI at Oxford University, speaking at the AI Summit, which took place online this week.