Building Transparency Into AI Projects - AI Summary

#artificialintelligence

That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it's monitored and updated, and the conditions under which it may be retired. There are four specific effects of building in transparency: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people.

In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To "prove" it was human, the company trained the AI to insert "umms" and "ahhs" into its request: for instance, "When would I like the reservation?"

If the product team doesn't explain how to properly handle the outputs of the model, introducing AI can be counterproductive in high-stakes situations. Take, for example, an AI that flags x-rays for cancerous tumors. In designing the model, the data scientists reasonably thought that erroneously marking an x-ray as negative when it in fact shows a cancerous tumor can have very dangerous consequences, so they set a low tolerance for false negatives and, thus, a high tolerance for false positives. Had they been properly informed -- had the design decision been made transparent to the end user -- the radiologists may have thought: I really don't see anything here, and I know the AI is overly sensitive, so I'm going to move on.

By being transparent from start to finish, genuine accountability can be distributed among everyone involved, since they are given the knowledge they need to make responsible decisions. Consider, for instance, a financial advisor who hides the existence of some investment opportunities and emphasizes the potential upsides of others because he gets a larger commission when he sells the latter. The more general point is that AI can undermine people's autonomy -- their ability to see the options available to them and to choose among them without undue influence or manipulation.
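The radiologist example above is, at bottom, a choice of decision threshold. Here is a minimal sketch of that trade-off on synthetic data with a hypothetical scikit-learn classifier (none of this comes from the article itself): lowering the threshold below the default 0.5 reduces false negatives at the cost of more false positives, which is exactly the design decision the end users needed to be told about.

```python
# Hypothetical sketch: how a lower decision threshold trades
# false negatives for false positives, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

for threshold in (0.5, 0.2):  # 0.2 mimics a "low tolerance for false negatives"
    preds = (probs >= threshold).astype(int)
    fn = int(np.sum((preds == 0) & (y == 1)))  # missed positives
    fp = int(np.sum((preds == 1) & (y == 0)))  # false alarms
    print(f"threshold={threshold}: false negatives={fn}, false positives={fp}")
```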


Python: Confusion Matrix

#artificialintelligence

A confusion matrix is a supervised machine learning evaluation tool that provides more insight into the overall effectiveness of a machine learning classifier. Unlike a simple accuracy metric, which is calculated by dividing the number of correctly predicted records by the total number of records, a confusion matrix returns four distinct counts to work with: true positives, false positives, true negatives, and false negatives. While I am not saying accuracy is always misleading, there are times, especially when working with imbalanced data, that accuracy can be all but useless. Let's consider credit card fraud. It is not uncommon that, in a list of credit card transactions, fraud events make up as little as 1 in 10,000 records.
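To make the fraud example concrete, here is a hedged sketch using synthetic counts rather than real transaction data: a degenerate model that always predicts "not fraud" scores 99.99% accuracy, and only the four counts of the confusion matrix reveal that it never catches the single fraud event.

```python
# Synthetic illustration: accuracy looks excellent on imbalanced data,
# while the confusion matrix exposes the failure.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.zeros(10_000, dtype=int)
y_true[0] = 1                          # one fraud event in 10,000 records
y_pred = np.zeros(10_000, dtype=int)   # model always predicts "not fraud"

print(accuracy_score(y_true, y_pred))  # 0.9999
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)                  # 9999, 0, 1, 0 -- the fraud is missed
```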


Identifying Cyber Threats Before They Happen: Deep Learning

#artificialintelligence

Crypto.com, Microsoft, Nvidia, and Okta all got hacked this year. In some hacks, attackers are looking to take data, while in others they are just trying things out. Either way, it is in companies' interest to patch the holes in their security systems, as more attackers are learning to take advantage of them. The project I am working on now aims to prevent cyber threats like these from happening. When a company is hacked, there is a lot at stake.


Reco raises $30M to prevent sensitive data leaks – TechCrunch

#artificialintelligence

Reco, a company using AI to map organizations' data sharing, today announced that it raised $30 million in a Series A round led by Insight Partners, with participation from Zeev Ventures, BoldStart, Angular Ventures, Jibe Ventures, CrewCapital and Cyber Club London. CEO Ofer Klein said the proceeds will go toward product development and supporting the company's go-to-market efforts. Reco is Klein's second venture after Kwik, an internet of things platform for "connected customer experiences." Cofounder Nakash led research at the Office of the Prime Minister in Israel prior to joining Reco, while cofounder Shapira, who also worked at the Office of the Prime Minister, was the head of algorithms at heads-up display startup Guardian Optical Technologies. "The distributed workforce is getting bigger. And each of these introduces new security risk," Klein told TechCrunch in an email interview.


Demystifying MILKit? (Part 1)

#artificialintelligence

As we begin to grow the MILKit community organically, it's important to keep the marketing message clear and accurate. M.I.L.K. is an acronym for Machine Intelligence Launch Knowledge, which describes our machine learning objective. The utility we're building for the crypto community is unique: not just another DEX, swap, or P2E metaverse game, but a vitally important utility to help protect people from scams, rug-pulls, honeypots, and other blockchain hazards. This article will be a living document, updated to include answers to questions that arise from the community.


Investors in gun-detection tech tested at NYC City Hall donated to mayor's PAC

Engadget

Earlier this year, New York City started testing a gun detection system from Evolv Technologies at City Hall and Jacobi Medical Center in the Bronx. Mayor Eric Adams, who has said he came across the system on the internet, has been talking up the tech for months as a way to help combat gun violence. Now, it has emerged that two people who donated $1 million to support Adams' mayoral run work at companies with investments in Evolv, as the New York Daily News first reported. The CEO of the investment firm Citadel, Kenneth Griffin, last year donated $750,000 to Strong Leadership NYC, a political action committee (PAC) that supported Adams. Jane Street Financial Services founder Robert Granieri gave $250,000, according to records. As of May 16th, Citadel held 12,975 shares in Evolv, a publicly traded company.


Top 10 Synthetic Data Startups Making a Mark in the Tech Sphere

#artificialintelligence

Designing good data-driven models hugely depends on the quality of the data. Data is just a set of numbers, so it shouldn't bother developers much; but, as they say, the devil lies in the details, and real data comes with issues like imbalanced classes, inherent biases, and unstructured values. Synthetic data, on the other hand, gives developers the flexibility to scale their data and freedom from those biases, opening up a whole lot of possibilities for creating models from data that doesn't exist in the real world. In addition, synthetic data holds the benefit of protecting user data privacy, all while leaving developers free to experiment.
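As one deliberately simple illustration of that flexibility, scikit-learn can generate synthetic tabular data with the class balance chosen by the developer; the startups profiled here of course use far more sophisticated generators.

```python
# Minimal sketch: synthetic tabular data where class balance and scale
# are free parameters, not artifacts of real-world collection.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=50_000,     # scale is a parameter, not a constraint
    n_features=20,
    weights=[0.5, 0.5],   # perfectly balanced classes by construction
    random_state=42,
)
print(X.shape, y.mean())  # (50000, 20), roughly 0.5 positives
```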


Forecasting Recessions With Scikit-Learn

#artificialintelligence

It is no secret that everybody wants to predict recessions. Many economists and finance firms have attempted this with limited success, but by and large there are several well-known leading indicators for recessions in the US economy. However, when presented to the general public, these indicators are typically taken alone and are not framed in a way that can give probability statements about an upcoming recession. In this project, I have taken several of those economic indicators and built a classification model to generate probabilistic statements. Here, the actual classification ('recession' or 'no recession') is not as important as the probability of a recession, since this probability will be used to determine a basic portfolio scheme which I will describe later on.
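The project's own code is not reproduced here, but the core idea, getting a probability rather than a hard label out of a scikit-learn classifier, is just predict_proba. A hedged sketch on placeholder data (the leading-indicator feature matrix below is hypothetical):

```python
# Hypothetical sketch: probabilistic recession statements via predict_proba.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # placeholder for leading-indicator features
y = (X[:, 0] + rng.normal(size=500) < -1).astype(int)  # placeholder labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
p_recession = clf.predict_proba(X[-12:])[:, 1]  # P(recession) for last 12 rows
print(p_recession.round(2))
```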


Combinatorial PurgedKFold Cross-Validation for Deep Reinforcement Learning

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. This article is written by Berend Gort and Bruce Yang, core team members of the open-source project AI4Finance, a community sharing AI tools for finance that is part of Columbia University in New York. Our previous article described the Combinatorial PurgedKFold Cross-Validation method in detail for classifiers (or regressors) with regular predictions.
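For readers unfamiliar with the technique the article builds on: purged K-fold splits time-ordered data into folds and removes (purges) training samples near each test fold, so information does not leak across the time boundary. Below is a minimal, hedged sketch of index-level purging with an embargo window; the combinatorial variant, which recombines test folds into many backtest paths, and purging based on label horizons are both omitted.

```python
# Simplified sketch of purged K-fold splitting with an embargo window;
# the combinatorial recombination of test folds is omitted for brevity.
import numpy as np

def purged_kfold_indices(n_samples, n_splits=5, embargo=5):
    """Yield (train, test) index arrays, purging training samples within
    `embargo` positions of the test fold to limit information leakage."""
    indices = np.arange(n_samples)
    for test in np.array_split(indices, n_splits):
        lo, hi = test[0] - embargo, test[-1] + embargo
        train = indices[(indices < lo) | (indices > hi)]
        yield train, test

for train, test in purged_kfold_indices(100):
    print(f"train={len(train)} samples, test={test[0]}..{test[-1]}")
```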


State of AI Ethics Report (Volume 6, February 2022)

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an "Analysis of the AI Ecosystem", "Privacy", "Bias", "Social Media and Problematic Information", "AI Design and Governance", "Laws and Regulations", "Trends", and other areas covered in the "Outside the Boxes" section. The two AI spotlights feature application pieces on "Constructing and Deconstructing Gender with AI-Generated Art" as well as "Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?". Given MAIEI's mission to democratize AI, submissions from external collaborators have been featured, such as pieces on the "Challenges of AI Development in Vietnam: Funding, Talent and Ethics" and using "Representation and Imagination for Preventing AI Harms". The report is a comprehensive overview of what the key issues in the field of AI ethics were in 2021, what trends are emergent, what gaps exist, and a peek into what to expect from the field of AI ethics in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas and make contributions to the field of AI ethics.