
Collaborating Authors: techbeacon


5 AI Articles We Almost Forgot We Love

#artificialintelligence

Oops! Valentine's Day came and went. In this belated Valentine's Day post, TechBeacon presents five unforgettable articles on AI that we love.


5 great ways to use AI in your test automation

#artificialintelligence

Don't get tripped up by thinking of the wrong kind of artificial intelligence (AI) when it comes to testing scenarios. The relevant kind, machine learning, is already being used in some testing scenarios. But before looking at automation-testing examples affected by machine learning, you need to define what machine learning (ML) actually is. At its core, ML is a pattern-recognition technology: it uses patterns identified by your ML algorithms to predict future trends. ML can consume tons of complex information, find patterns that are predictive, and then alert you to deviations from those patterns.
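The pattern-then-alert idea above can be sketched in a few lines. This is a minimal illustration, not code from the article: it "learns" a baseline from historical test-run durations and flags values that deviate sharply from it. The data and the three-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """Learn what 'normal' looks like from history, then alert on deviations.

    A stand-in for the pattern-recognition idea: any value more than
    `threshold` standard deviations from the baseline mean is flagged.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > threshold * sigma

# Hypothetical test-suite run times in seconds.
baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
print(is_anomalous(baseline, 30.5))  # far outside the learned pattern -> True
print(is_anomalous(baseline, 12.3))  # within normal variation -> False
```

Real tools replace the mean-and-deviation baseline with trained models, but the shape is the same: fit on history, predict, alert on the difference.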


What you need to know about China's AI ethics rules

#artificialintelligence

Late last year, China's Ministry of Science and Technology issued guidelines on artificial intelligence ethics. The rules stress user rights and data control while aligning with Beijing's goal of reining in big tech. China is now trailblazing the regulation of AI technologies, and the rest of the world needs to pay attention to what it's doing and why. The European Union had issued a preliminary draft of AI-related rules in April 2021, but we've seen nothing final. In the United States, the notion of ethical AI has gotten some traction, but there aren't any overarching regulations or universally accepted best practices.


Why most machine learning projects stumble

#artificialintelligence

Despite widespread interest in machine learning (ML), relatively few projects leave the proof-of-concept phase and enter production. In fact, a 2020 Capgemini report found that roughly 85% of ML projects across its client organizations grind to a halt, despite successful preliminary models and ample support from executive leaders. Further, the study found, only half of the world's leading AI-powered enterprises successfully roll out artificial intelligence projects, including ML models, and this number drops substantially among organizations without dedicated ML teams. In recent years, AI solutions have attracted the interest of executive leadership across industries. ML, perhaps the leading subset of AI, has particularly interested enterprises racing to digitize in the modern market because its models can automatically "learn" and update.


Rise of the machines: The coming AI/testing singularity

#artificialintelligence

Artificial intelligence (AI) is the next exponential technology trend, and it's knocking on your front door. In fact, many organizations have initiatives well under way, according to the World Quality Report. One branch, "artificial general intelligence," is the effort to make machines that are conscious like humans and can reflect on their own existence. The other is "narrow AI" (also known as machine learning). Machine learning focuses on computer algorithms that can be trained with data and that mimic human thinking, without actually thinking.


Relief is coming for your security team: 6 ways AI is a game-changer

#artificialintelligence

Artificial intelligence (AI) and machine learning (ML) give security teams the ability to catch bad guys with the power of math. Through the use of effective analytical methods, organizations can become more cyber resilient. With statistical learning; supervised, semi-supervised, and unsupervised ML; advanced visualizations; and other principled approaches tailored for cybersecurity, you will be one step ahead of the game. Here are six ways AI and ML, along with analytics, can boost your company's cyber resilience. AI and ML can remove friction in managing identities through adaptive authentication, which dynamically escalates the factors needed to verify an identity based on risk.
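The adaptive-authentication idea described above, dynamically escalating verification factors as risk rises, can be sketched roughly as follows. This is a toy illustration, not from the article or any product: the risk signals, thresholds, and factor names are all made-up assumptions.

```python
def score_risk(login):
    """Toy risk score in [0, 1] from simple signals (all names hypothetical)."""
    signals = [
        login.get("new_device"),
        login.get("odd_hour"),
        login.get("new_country"),
    ]
    return sum(bool(s) for s in signals) / len(signals)

def required_factors(risk_score):
    """Escalate authentication factors as the assessed risk grows.

    Low risk: password only. Moderate: add a one-time passcode.
    High: also require a hardware key. Thresholds are illustrative.
    """
    factors = ["password"]
    if risk_score >= 0.3:
        factors.append("otp")
    if risk_score >= 0.7:
        factors.append("hardware_key")
    return factors

# A login from a new device in a new country is treated as riskier,
# so more factors are demanded before the identity is verified.
risk = score_risk({"new_device": True, "new_country": True})
print(required_factors(risk))
```

Production systems derive risk from far richer signals (device fingerprints, behavioral baselines, threat intelligence), but the escalation logic follows this pattern.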


Why you should prioritize governance of ML and AI

#artificialintelligence

This is a truly momentous time for machine learning in the enterprise, with investments soaring and a growing number of use cases that can create tangible business value. Yet organizations are still struggling with important phases of the AI/ML lifecycle. One particular challenge stands out: governance. A lack of robust governance doesn't just limit the potential success of your AI/ML initiative; it could put your entire business in peril as well. That was one of our major findings in Algorithmia's "2021 Enterprise Trends in Machine Learning" report.


Testing for bias in your AI software: Why it's needed, how to do it

#artificialintelligence

When it comes to artificial intelligence (AI) and machine learning (ML) in testing, much of the interest and innovation today revolves around using these technologies to improve and accelerate the practice of testing. The more interesting problem lies in how you should go about testing the AI/ML applications themselves. In particular, how can you tell whether a response is correct? Part of the answer involves new ways to look at functional testing, but testers face an even bigger problem: cognitive bias, the possibility that an application returns an incorrect or non-optimal result because of a systematic skew in its processing that produces results inconsistent with reality. This is very different from a bug, which you can define as an identifiable and measurable error in a process or result.


AI gives SOCs analytical prowess: 3 ways it can boost your resilience

#artificialintelligence

As IT environments become more dynamic, hybrid, and complex, it's becoming increasingly difficult for security operations center (SOC) teams to quickly detect and address critical threats with traditional tools. SOC staff must process and analyze a massive, and growing, amount of data as they face ever more sophisticated cyber attacks. To respond effectively, SOC leaders can't keep adding rules-based tools to their already large and often unwieldy security stack. Instead, they need AI technology that analyzes data at scale and in real time and that uses machine learning to spot any anomalies that could signal a breach. That way, SOC teams can detect unknown, fast-evolving threats missed by rules-based products configured to spot known attacks.
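The "at scale and in real time" requirement above is what separates this approach from batch analysis: events must be scored as they arrive, without storing the full history. A minimal sketch of that idea, using Welford's online algorithm to maintain a running baseline, is below; the warm-up window and threshold are illustrative assumptions, not from the article.

```python
class StreamingDetector:
    """Online anomaly detector over a single metric (e.g. bytes per minute).

    Maintains a running mean and variance with Welford's algorithm, so each
    event is scored in O(1) time and memory -- no history is retained.
    Thresholds are illustrative, not tuned for any real environment.
    """

    def __init__(self, threshold=4.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold
        self.warmup = warmup   # observations needed before alerting

    def observe(self, x):
        """Score x against the baseline, then fold it into the running stats."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.threshold * std
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Steady traffic establishes the baseline; a sudden spike is flagged.
detector = StreamingDetector()
for volume in [100, 101, 99] * 7:
    detector.observe(volume)
print(detector.observe(100000))  # -> True
```

Real SOC platforms model many signals jointly, but the streaming structure, update a compact baseline and score each event against it immediately, is the core of the real-time claim.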


Adversarial machine learning: 5 recommendations for app sec teams

#artificialintelligence

In 2016, Microsoft released a prototype chatbot on Twitter. The automated program, dubbed Tay, responded to tweets and incorporated the content of those tweets into its knowledge base, riffing off the topics to carry on conversations. In less than 24 hours, Microsoft had to yank the program and issue an apology after the software started spewing vile comments, including "I f**king hate feminists," and tweeting that it agreed with Hitler. Online attackers had used crafted comments to pollute the machine-learning algorithm, exploited a specific vulnerability in the program, and recognized that the bot frequently would just repeat comments, a major design flaw. Tay has not returned.