AI ethics: How Salesforce is helping developers build products with ethical use and privacy in mind

ZDNet

People have long debated what constitutes the ethical use of technology. But with the rise of artificial intelligence, the discussion has intensified, as it's now algorithms, not humans, that are making decisions about how technology is applied. In June 2020, I had a chance to speak with Paula Goldman, Salesforce's Chief Ethical and Humane Use Officer, about how companies can develop technology, specifically AI, with ethical use and privacy in mind. I spoke with Goldman during Salesforce's TrailheaDX 2020 virtual developer conference, but we didn't have a chance to air the interview then. I'm glad to bring it to you now, as the discussion about ethics and technology has only intensified as companies and governments around the world use new technologies to address the COVID-19 pandemic. The following is a transcript of the interview, edited for readability. Bill Detwiler: So let's get right to it.


China five-year plan aims for supremacy in AI, quantum computing

Engadget

China's tech industry has been hit hard by US trade battles and the economic uncertainties of the pandemic, but it's eager to bounce back in the relatively near future. According to the Wall Street Journal, the country used its annual party meeting to outline a five-year plan for advancing technology that aids "national security and overall development." It will create labs, foster educational programs and otherwise boost research in fields like AI, biotech, semiconductors and quantum computing. The Chinese government added that it would increase spending on basic research (that is, studies of potential breakthroughs) by 10.6 percent in 2021, and would create a 10-year research strategy. China has a number of technological advantages, such as its 5G availability and the sheer volume of AI research it produces.


How to Build Machine Learning Model using SQL

#artificialintelligence

A label is the variable to be predicted. In this example, I will predict whether a website visitor will make any transactions, and I gave this label the name "purchase". It can be derived from the existing variable "totals.transactions". For simplicity, let's make this prediction a black-or-white situation: either "purchase" or "no purchase". Since model training cannot handle string values as the output, it is necessary to code them into numbers.
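The coding step described above, deriving a numeric 0/1 label from a transactions count, can be sketched in Python as follows (the field name and sample values are illustrative assumptions, not the article's actual data):

```python
# Derive a binary "purchase" label from a visitor's transaction count,
# mirroring the article's coding step: missing or zero transactions
# become 0 ("no purchase"), anything else becomes 1 ("purchase").

def encode_purchase(total_transactions):
    """Return 1 if the visitor made at least one transaction, else 0."""
    if total_transactions is None:  # missing value, treated as no purchase
        return 0
    return 1 if total_transactions > 0 else 0

# Hypothetical visitor records: None means the field was absent.
visitors = [None, 0, 3, 1]
labels = [encode_purchase(t) for t in visitors]
print(labels)  # [0, 0, 1, 1]
```

The same transformation is typically done directly in the SQL query that feeds the model, e.g. with a CASE expression over the transactions column.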


Decision Intelligence: Expanding the Horizon of Business Intelligence

#artificialintelligence

The volume of data businesses produce today carries much significance for overall growth. Foresighted companies know that if they want to compete in a highly competitive market, they must apply advanced analytics to ever-growing data sets. Business intelligence lets them look into their historical and current data sets and provides predictive views of their business operations. Augmented by artificial intelligence and machine learning, business intelligence gives enterprises decision-making context and recommendations. This significantly drives a move toward decision intelligence: the creative blending of technology into enterprise decision-making strategies and workflows.


AI can help Google and Amazon detect unconscious bias

#artificialintelligence

Some enterprises regard this area of bias as important to address because unconscious bias is often easy to miss. Moreover, unconscious bias is often far more pervasive in the workplace than blatant discrimination. According to some researchers, unconscious bias can be blamed for lower wages, fewer opportunities for advancement, and higher turnover. Unconscious biases are social stereotypes held by members of one group about other groups of people.


Here's an adorable factory game about machine learning and cats

#artificialintelligence

Machine learning is perhaps old hat by now, but what's never going to be old hat is cats. People just can't seem to get enough of them. Learning Factory is an Early Access game, released just last month, about building an automated factory that produces the things cats want to buy, then sells them. Your job is to keep the shelves stocked and the cats happy, and to earn money by selling at optimal prices. By making offers to cats, your factory can train up machine learning models that then automatically adjust market prices to account for trends and the wallets of the cats in question. Rich cats want fancy, expensive cat towers and food, while normal cats just want a good deal on a ball of yarn and construction-worker cats want raw materials. It's a neat concept that bears out pretty well in action: do you want to build a huge, all-inclusive single machine learning model, or instead focus on specific models tailored to each customer type?


Cryptology ePrint Archive: Report 2021/287 - A Deeper Look at Machine Learning-Based Cryptanalysis

#artificialintelligence

In this article, we propose a detailed analysis and thorough explanation of the inner workings of this new neural distinguisher. First, we studied the classified sets and tried to find patterns that could guide us to a better understanding of Gohr's results. We show with experiments that the neural distinguisher generally relies on the differential distribution of the ciphertext pairs, but also on the differential distributions in the penultimate and antepenultimate rounds. To validate our findings, we construct a distinguisher for the Speck cipher based on pure cryptanalysis, without using any neural network, that achieves essentially the same accuracy as Gohr's neural distinguisher with the same efficiency (thereby improving over previous non-neural distinguishers).


An AI Was Taught to Play the World's Hardest Video Game and Still Couldn't Set a New Record

#artificialintelligence

What's the hardest video game you've ever played? If it wasn't QWOP, then let me tell you right now that you don't know how truly difficult a game can be. The deceptively simple running game is so challenging to master that even an AI trained using machine learning only mustered a top-10 score instead of shattering the record. If you've never played QWOP before, you owe it to yourself to give it a try and see if you can even get your sprinter off the starting line. Developed by Bennett Foddy back in 2008, QWOP was inspired by an '80s arcade game called Track & Field that required players to mindlessly mash buttons to win a race.


New Machine Learning Theory Raises Questions About the Very Nature of Science

#artificialintelligence

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities, which are designed to harvest on Earth the fusion energy that powers the sun and stars. The algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law." Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres.


Training Pattern

#artificialintelligence

With supervised training, the desired inputs and outputs are provided by the trainer. The network classifies the inputs and compares the resulting outputs against the benchmark outputs. Any errors are back-propagated through the system, forcing the network to adjust its parameter weights. This tweaking process repeats over and over; stacking many such layers of adjustable weights is what gives "deep learning" networks their name.
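The loop described above, compare outputs to benchmark targets, propagate the error back, and nudge the weights, can be sketched with a single linear neuron trained by gradient descent (an illustrative toy, not the article's code; the data and learning rate are assumptions):

```python
# Minimal supervised-training sketch: a single weight w is repeatedly
# adjusted so that w * x matches the trainer-provided targets (y = 2x).

inputs  = [1.0, 2.0, 3.0, 4.0]
targets = [2.0, 4.0, 6.0, 8.0]   # benchmark outputs supplied by the trainer

w = 0.0      # the parameter weight being tuned
lr = 0.01    # learning rate: size of each adjustment

for epoch in range(200):          # the "continuous tweaking" loop
    for x, y in zip(inputs, targets):
        pred = x * w              # forward pass: network output
        error = pred - y          # compare against the benchmark output
        w -= lr * error * x       # back-propagate: gradient step on w

print(round(w, 3))  # converges close to 2.0
```

A deep network does the same thing, but the error signal is propagated backward through many layers of such weights via the chain rule.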