Can Sam Altman Be Trusted with the Future?

The New Yorker

In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books--romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google's researchers had done, he prompted it to predict the most probable next word in a sentence. The machine responded: one word, then another, and another--each new term inferred from the patterns buried in those seven thousand books. Radford hadn't given it rules of grammar or a copy of Strunk and White.
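The core task Radford gave the network, predicting the most probable next word, can be illustrated with a toy bigram counter. This is emphatically not the transformer itself, just a minimal sketch of the same prediction objective: count which word follows each word in a corpus, then guess the most frequent successor. A GPT-style model learns far richer versions of these statistics with a neural network over billions of tokens.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for every word, how often each other word follows it.
    words = text.lower().split()
    successors = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        successors[a][b] += 1
    return successors

def predict_next(successors, word):
    # Predict the most frequent successor, or None for unseen words.
    counts = successors.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the ship sailed at dawn and the ship returned at dusk"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "ship" follows "the" most often here
```

Repeatedly feeding each predicted word back in as the next prompt is, in miniature, how such a model generates text one term at a time.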


Can Features for Phishing URL Detection Be Trusted Across Diverse Datasets? A Case Study with Explainable AI

Mia, Maraz, Derakhshan, Darius, Pritom, Mir Mehedi A.

arXiv.org Artificial Intelligence

Phishing has been a prevalent cyber threat that manipulates users into revealing sensitive private information through deceptive tactics designed to masquerade as trustworthy entities. Over the years, proactive detection of phishing URLs (or websites) has become a widely accepted defense approach. In the literature, we often find supervised machine learning (ML) models with highly competitive performance for detecting phishing websites based on features extracted from both phishing and benign (i.e., legitimate) websites. However, it is still unclear whether these features or indicators depend on a particular dataset or generalize to phishing detection overall. In this paper, we delve deeper into this issue by analyzing two publicly available phishing URL datasets, where each dataset has its own set of unique and overlapping features related to the URL string and website contents. We investigate whether overlapping features are similar in nature across datasets and how a model performs when trained on one dataset and tested on the other. We conduct practical experiments and leverage explainable AI (XAI) methods such as SHAP plots to provide insights into different features' contributions to phishing detection, answering our primary question: "Can features for phishing URL detection be trusted across diverse datasets?" Our case study results show that features for phishing URL detection are often dataset-dependent and thus may not be trusted across different datasets, even when the datasets share the same set of features.
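The cross-dataset experiment described above can be sketched in a few lines: train a classifier on one dataset, test it on a second dataset that shares the same feature columns but has a shifted feature distribution, and compare in-dataset versus cross-dataset accuracy. Everything below is a hypothetical illustration, not the paper's setup: the synthetic `url_length` and `num_dots` features stand in for real URL features, and plain impurity-based importances stand in for the SHAP plots the authors use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, shift):
    # Two shared features; the label rule moves with the dataset's
    # feature distribution, mimicking dataset-dependent indicators.
    url_length = rng.normal(50 + shift, 10, n)
    num_dots = rng.poisson(3, n)
    y = ((url_length > 55 + shift) & (num_dots > 2)).astype(int)
    return np.column_stack([url_length, num_dots]), y

X_a, y_a = make_dataset(1000, 0)    # "dataset A"
X_b, y_b = make_dataset(1000, 30)   # "dataset B": shifted distribution

clf = RandomForestClassifier(random_state=0).fit(X_a, y_a)
print("in-dataset accuracy :", accuracy_score(y_a, clf.predict(X_a)))
print("cross-dataset accuracy:", accuracy_score(y_b, clf.predict(X_b)))
print("feature importances :", clf.feature_importances_)
```

Because the decision boundary the model learned on dataset A no longer matches dataset B, cross-dataset accuracy drops sharply, which is the kind of dataset dependence the paper reports.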


Can Generative AI Bots Be Trusted?

Communications of the ACM

In November 2022, OpenAI released ChatGPT, a major step forward in creative artificial intelligence. ChatGPT is OpenAI's interface to a "large language model," a new breed of AI based on a neural network trained on billions of words of text. ChatGPT generates natural language responses to queries (prompts) on those texts. In bringing working versions of this technology to the public, ChatGPT has unleashed a huge wave of experimentation and commentary. It has inspired moods of awe, amazement, fear, and perplexity.


Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted

#artificialintelligence

How might The Terminator have played out if Skynet had decided it probably wasn't responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they're untrustworthy. These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don't have the capacity to analyse. While Skynet might still be some way off, AI is already making decisions in fields that affect human lives like autonomous driving and medical diagnosis, and that means it's vital that they're as accurate as possible. To help towards this goal, this newly created neural network system can generate its confidence level as well as its predictions.


Artificial Intelligence Neural Network Learns When It Should Not Be Trusted

#artificialintelligence

MIT researchers have developed a way for deep learning neural networks to rapidly estimate confidence levels in their output. The advance could enhance safety and efficiency in AI-assisted decision making. A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes. Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making.
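The article does not spell out the MIT method, so the sketch below only illustrates the general idea of pairing a prediction with a confidence score: a simple, assumed proxy that turns the entropy of a network's softmax output into a 0-to-1 confidence value. A peaked output distribution yields high confidence; a nearly flat one yields low confidence.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a vector of logits.
    z = np.exp(logits - logits.max())
    return z / z.sum()

def predict_with_confidence(logits):
    # Confidence = 1 - normalized entropy of the output distribution.
    p = softmax(np.asarray(logits, dtype=float))
    entropy = -(p * np.log(p + 1e-12)).sum()
    max_entropy = np.log(len(p))  # entropy of a uniform distribution
    return int(p.argmax()), 1.0 - entropy / max_entropy

print(predict_with_confidence([8.0, 0.5, 0.2]))  # peaked -> high confidence
print(predict_with_confidence([1.0, 0.9, 1.1]))  # flat   -> low confidence
```

A downstream system, such as a diagnostic aid, could use such a score to defer low-confidence predictions to a human, which is the safety benefit the researchers describe.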


Pursuit of AI that Can be Trusted Getting More Attention in Pandemic Era - AI Trends

#artificialintelligence

AI is receiving a push from the race to find a vaccine, diagnostics and effective treatments for the COVID-19 virus, and the push has also heightened awareness of the need to implement AI that is transparent and free of bias--AI that can be trusted. The World Economic Forum is one organization that has responded. With ethics in mind, the organization's AI and Machine Learning team recently announced its Procurement in a Box toolkit with concrete advice for purchasing, risk assessments, proposal drafting and evaluation. To produce the toolkit, the Forum worked over the past year with many organizations, including the United Kingdom's Office for AI in the Department for Digital, Culture, Media & Sport, with Deloitte, Salesforce and Splunk, as well as 15 other countries and more than 150 members of government, academia, civil society and the private sector. The development process incorporated workshops and interviews with government procurement officials and private sector procurement professionals, according to a recent account in Modern Diplomacy.


Thinking of RPA implementation? Read this to know more on how UiPath can assist in your strategy

#artificialintelligence

Are you ready to seize the opportunities that will arise as we move into this automated era? Automating business and operational processes is one key factor in enhancing your business's potential. With minimal initial investment, it delivers quick organizational benefits, and it does so without disrupting the underlying systems. Multiple traditional solutions take this approach.