Military AI speeds up human decision-making - Military Embedded Systems

#artificialintelligence

Imagining military artificial intelligence (AI) applications can make one dream up scenarios like those in the Terminator films, but in reality, AI solutions for defense are much more mundane and focused on improving decision-making for humans, whether they're aircraft maintenance personnel; pilots; or intelligence, surveillance, and reconnaissance (ISR) analysts, says John Canipe, Director of Business Development, Air Force, at SparkCognition Government Systems, during a conversation we had at his company headquarters in Austin, Texas. We also discussed the difference between AI and machine learning (ML), how AI is being applied across multiple military domains, and more.

MCHALE: Please provide a brief description of your responsibility within SparkCognition Government Systems and your group's role within the company.

CANIPE: As Director of Business Development, Air Force, my current responsibilities are product development, capture management, price/licensing of products, and generating new and recurring sales.

MCHALE: We often see AI/ML [artificial intelligence/machine learning] in the same sentence, or used to describe the same thing, but what is the actual difference between AI and ML?

CANIPE: Differentiating AI from ML is a struggle everyone is having right now.


Data Careers -- Explained

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses. I recently applied for jobs in the Data Science space, and while the titles and descriptions were different, the skill sets and responsibilities were the same.


Engineer: Failing To See His AI Program as a Person Is "Bigotry"

#artificialintelligence

Earlier this month, just in time for the release of Robert J. Marks's book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer's claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of "bigotry" against the program. He has also accused Wired of misrepresenting the story. Wired reported that he had found an attorney for LaMDA, but he claims that LaMDA itself asked him to find an attorney. I think every person is entitled to representation.


Team Leader - Deep Learning

#artificialintelligence

You are passionate about AI and Deep Learning and want to apply this technology to solve real-life problems. Your solid scientific background in these topics gives you the required know-how and ability to (a) keep up with the rapidly moving state of the art, (b) quickly prune the scientific literature, and (c) select promising methodologies for the problem at hand. You are business-oriented and pragmatic and understand that in a business context a solution must be conceived, built, and tested within a limited time frame. You are hands-on and fluent with modern Deep Learning tools. You have a programming background in Python, MATLAB, and C/C++.


Building Transparency Into AI Projects - AI Summary

#artificialintelligence

That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it's monitored and updated, and the conditions under which it may be retired. There are four specific effects of building in transparency: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people. In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To "prove" it was human, the company trained the AI to insert "umms" and "ahhs" into its request: for instance, "When would I like the reservation? If the product team doesn't explain how to properly handle the outputs of the model, introducing AI can be counterproductive in high-stakes situations. In designing the model, the data scientists reasonably thought that erroneously marking an X-ray as negative when it in fact shows a cancerous tumor can have very dangerous consequences, so they set a low tolerance for false negatives and, thus, a high tolerance for false positives. Had they been properly informed -- had the design decision been made transparent to the end user -- the radiologists may have thought, "I really don't see anything here, and I know the AI is overly sensitive, so I'm going to move on." By being transparent from start to finish, genuine accountability can be distributed among all involved, as they are given the knowledge they need to make responsible decisions. Consider, for instance, a financial advisor who hides the existence of some investment opportunities and emphasizes the potential upsides of others because he gets a larger commission when he sells the latter. The more general point is that AI can undermine people's autonomy -- their ability to see the options available to them and to choose among them without undue influence or manipulation.
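
The radiology example turns on a single, documentable design choice: where the model's decision threshold sits determines how false negatives trade off against false positives. The sketch below illustrates that trade-off on synthetic data; the model, threshold values, and numbers are hypothetical and are not drawn from the article, but they show the kind of parameter whose setting is worth disclosing to the people acting on the model's output.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the scenario above: 1,000 "scans" with five features,
# of which a small minority are truly positive. Entirely hypothetical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.2).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]  # predicted probability of "positive"

# Compare a default threshold with a deliberately sensitive (lower) one.
for threshold in (0.5, 0.2):
    predictions = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, predictions).ravel()
    print(f"threshold={threshold}: false negatives={fn}, false positives={fp}")

# Lowering the threshold cuts missed positives (false negatives) at the cost of
# more false alarms (false positives) -- the design decision the article argues
# should be made transparent to the radiologists who act on the output.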


No, the Google AI isn't sentient, but it likely is racist and sexist

Mashable

While a sentient AI is a thoroughly freaky concept, it's not (yet) a reality. In a recent interview with Wired, engineer and mystic Christian priest Blake Lemoine discussed why he believes that Google's large language model named LaMDA has become sentient, complete with a soul. While that claim has been refuted by many in the artificial intelligence community and has resulted in Lemoine being placed on paid administrative leave by Google, Lemoine also explained how he began working on LaMDA. His journey with the AI started with a much more real-world problem: examining the model for harmful biases in relation to sexual orientation, gender identity, ethnicity, and religion. "I do not believe there exists such a thing as an unbiased system," Lemoine told Wired.


Senior Data Scientist, Credit

#artificialintelligence

At Caribou, we're on a mission to help people save money and take control of their car payments. Caribou does this by using technology to unlock low rates, and people to make the process easy and enjoyable. We offer a fully online application and a dedicated team to walk you through the process. We put Drivers in control. Caribou is a hyper-growth company built by leaders from the technology, automotive, and finance industries.


Co-creating the Metaverse – Immersion, Responsibility, and Humanized AI

#artificialintelligence

Advances in machine learning, computer vision, and autonomous processes will open a vast array of opportunities to organizations and employees. This will force us to rethink many aspects of our life and work. For example, virtual personal assistants can understand text, context, and tone of voice, converse in natural language, make human-like gestures, and even support decision-making. The algorithms provide supervised and unsupervised learning capabilities, can be programmed in virtually any language, and can be deployed at scale in any location. Using historical data, they create unique AI models that are tailored to specific business and life environments.


7 Steps To More Ethical Artificial Intelligence

#artificialintelligence

AI-generated output can't be explained. This is all true, and is happening today, and there's a risk of these issues accelerating as AI adoption grows. Before the lawsuits start flowing and government regulators start cracking down, organizations using AI need to become more proactive and formulate actionable AI ethics policies. But an effective AI ethics policy requires more than some feel-good statements. It requires actions, built into an AI ethics-aware culture.


This AI attorney says companies need a chief AI officer -- pronto

#artificialintelligence

When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, "people were laughing at me," he said. Newman, who leads global law firm Baker McKenzie's machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, "What's that?" But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation and legislation currently swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said. This recognition led to a new Baker McKenzie report, released in March, called "Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence." The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organization's adoption, use and management of AI-enabled tools. In a press release announcing the survey, Newman said: "Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly."