Collaborating Authors


Bernie Sanders calls out potential 2024 presidential rival Buttigieg over flight cancellations, delays

FOX News

Sen. Bernie Sanders, I-Vt., called for action from Transportation Secretary Pete Buttigieg, telling his supporters to urge the Biden administration to "take action to reduce flight cancellations and delays" in America. In an email sent to supporters on Friday, the senator targeted Buttigieg over the flight delays and cancellations Americans are facing around Independence Day, as well as the high prices of tickets, checked bags, and fees. As rumors circulate around a potential presidential run by Sanders in 2024, his remarks regarding the action he believes should be taken by Buttigieg, who is also rumored to be a Democratic presidential hopeful, were accompanied by a petition urging swift action on the issue from the administration. This is not the first time Sanders has spoken out about the problems or criticized Buttigieg. Earlier this week, Sanders sent a letter to Buttigieg urging "immediate action to substantially reduce" the problems Americans are being forced to endure with air travel.

Hey C-Suite: AI Won't Save You!


This article is a collaboration with David Gossett, Principal with Infornautics, who builds first-mover technologies that have no instruction set and need to be invented from scratch. He believes data has a story to tell if we apply the right machine models. His specialty is unstructured data. This article is intended to be provocative, to summon curiosity about the issues that plague us today when it comes to machine learning. Three years ago, I wrote this article, Artificial Intelligence Needs to Reset. The AI hype that was supposed to translate into all things automated is still far off. Since that time, we've experienced speed bumps pointing to issues including lack of model accountability (black boxes), bias, lack of data representation in the training set, etc. An AI Ethics movement emerged to demand more responsible tech, increased model transparency, and verifiable models that do what they're supposed to do without impairing or harming individuals or groups in the process. Our future is Artificial Intelligence. It's been conjectured that this wonderful AI will be our savior.

How visual-based AI is evolving across industries


Artificial intelligence is transforming the business world as a whole with all its applications and potential, and visual-based AI in particular is capable of interpreting digital images and videos. Visual-based AI, which refers to computer vision, is an application of AI that is playing a significant role in enabling digital transformation: it lets machines detect and recognize not just images and videos, but also the various elements within them, such as people, objects, animals and even sentiments and emotions, to name a few. Artificial intelligence is now evolving further across various industries and sectors. Transport: Computer vision aids in a better transport experience, as video analytics combined with automatic number plate recognition (ANPR) can help track and trace violators of traffic safety laws (speed limits, lane violations, etc.) and stolen or lost cars, as well as assist in toll management and traffic monitoring and control. Aviation: Visual AI can help provide prompt assistance to elderly passengers and those requiring assistance (physically challenged passengers, pregnant women, etc.); it can also be used to create a new "face-as-a-ticket" option for easy and fast boarding, to track down lost baggage around the airport, and for security surveillance of passengers and suspicious objects.
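The "face-as-a-ticket" idea described above typically boils down to comparing a face embedding captured at the gate against enrolled passenger embeddings. The sketch below is a toy illustration of that matching step using cosine similarity; the names, vectors, and threshold are all made up for illustration (a real system would obtain embeddings from a face-recognition model):

```python
import math

# Hypothetical enrolled passenger embeddings (illustrative 3-D vectors;
# in practice these would come from a face-recognition model).
enrolled = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(embedding, threshold=0.9):
    """Return the best-matching enrolled passenger, or None below threshold."""
    best = max(enrolled, key=lambda name: cosine(embedding, enrolled[name]))
    return best if cosine(embedding, enrolled[best]) >= threshold else None

# A gate-camera embedding close to Alice's enrolled vector matches her;
# an unrelated embedding matches nobody.
print(match_face([0.88, 0.12, 0.21]))  # alice
print(match_face([0.0, 0.0, 1.0]))     # None
```

The threshold trades off false accepts against false rejects; a deployed system would tune it on evaluation data rather than hard-code 0.9.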

AI Ethics in Action: Making the Black Box Transparent - DATAVERSITY


In my third article about the ethics of artificial intelligence (AI), I look at operationalizing AI ethics. Human intelligence remains a key factor – to keep a watchful eye on potential biases. Amazon caused a stir in late 2018 with media reports that it had abandoned an AI-powered recruitment tool because it was biased against women. Conceived as a piece of in-house software that could sift through hundreds of CVs at lightspeed and accurately identify the best candidates for any open position, the application had acquired one bad habit: It had come to favor men over women for software developer jobs and other technical roles. It had learned from past data that more men applied for and held these positions, and it now misread male dominance in tech as a reflection of their superiority, not social imbalances.

How Gather AI's Automated Inventory Management System Helps Businesses


Pittsburgh, PA--March 29, 2022: Gather AI, the first truly automated inventory management system that brings near-real-time visibility to warehousing operations, has positively impacted many customers. From small physical stores all the way to multinational corporations like Walmart and Amazon, businesses depend on reliable and accurate inventory management software, and the task is even more challenging at large-scale operations such as the warehouses of the biggest Fortune 500 retailers. But even with inventory management software, large organizations still rely on people on forklifts with barcode readers to perform cycle counts, spending everything from significant employee time to costly machinery to properly manage large-scale inventories, such as those found in retail, third-party logistics, food distribution, and air cargo warehouses. Most importantly, visibility into what's sitting on the DC floor is delayed by 3-4 months. To solve this significant problem, Gather AI is building the world's first truly autonomous inventory management platform, freeing logistics-driven organizations from inefficient manual tasks through intelligent and robust automation.

Artificial Intelligence and Ethics


Artificial intelligence (AI) refers to computer systems that, like humans, can solve complex problems. Machine learning (ML) allows machines to learn from available data so that they can produce a precise output. AI is used almost everywhere: in facial recognition, medicine, online gaming, sports, automobiles, insurance, airlines, defence, and government and private companies. Decision making on various aspects of our daily lives is being outsourced to artificial intelligence and machine-learning algorithms, motivated by the speed and efficiency they bring to the decision-making process. However, the public and governments are slowly beginning to realise the dangers and complexity of programmes developed using AI and ML, and the need for proper checks and balances in the way such programmes are developed and used. Therefore, it is important to take fairness into consideration when consuming the output from these programmes.
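One concrete way to take fairness into consideration when consuming model output is to measure it. The sketch below computes a demographic parity gap, the difference in favourable-decision rates between two groups, on entirely hypothetical predictions and group labels:

```python
# Hypothetical model decisions (1 = favourable) with a group label per person.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group):
    """Fraction of favourable decisions for members of one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference():
    """Gap in favourable-decision rates between the two groups."""
    return abs(positive_rate("A") - positive_rate("B"))

print(positive_rate("A"))  # 0.6
print(positive_rate("B"))  # 0.4
# A nonzero gap flags a possible fairness issue worth investigating.
print(round(demographic_parity_difference(), 2))  # 0.2
```

Demographic parity is only one of several fairness criteria (equalized odds and calibration are others), and a gap alone does not prove discrimination; it is a signal that the decisions deserve scrutiny.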

Protecting The Human: Ethics In AI


When we think about the future of our world and what exactly that looks like, it's easy to focus on the shiny objects and technology that make our lives easier: flying cars, 3D printers, digital currencies and automated everything. In the opening scene of the animated film WALL-E – which takes place in the year 2805 – a song from "Hello, Dolly!" happily plays in the background, starkly contrasting the glimpse we get of our future planet Earth: an abandoned wasteland with heaping piles of trash around every corner. Humans had all evacuated Earth by this point and were living in a spaceship, where futuristic technology and automation left them overweight, lazy and completely oblivious to their surroundings. Machines do everything for them, from the hoverchairs that carry them around, to the robots that prepare their food. Glued to their screens all day, which have taken control of their lives and decisions, humans exhibit lazy behaviors like video chatting the person physically next to them.

Thinking outside of the AI Black box.


These same abilities humans are now trying to emulate with machines; they are in fact the core components of Artificial Intelligence (AI), one of the most important technical developments of our era. This technology is transforming knowledge, work, governance and the core of our daily lives, and as the sophistication of these systems increases, especially with the advent of Deep Neural Networks (DNNs), I would argue that human understanding of these systems is decreasing. A need is arising to bring to this field the human-centered design approach of HCI (Human-Computer Interaction), and within this paper I will suggest how art and creative thought, together with HCI expertise, can help broaden the current spectrum of AI and its accessibility, and possibly form a joint venture to imagine what AI could become. Since humans started developing their capacity for self-introspection around 40 thousand years ago, they have used art to communicate, evoke emotions and recall past events. These cognitive abilities have helped humans survive and evolve as a species, putting into use tools of memory, language, understanding, reasoning, learning, pattern recognition and expression.

Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient

The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning. Our processes for evaluating AI trustworthiness have substantial ramifications for ML's impact on science, health, and humanity, yet confusion surrounds foundational concepts. What does it mean to trust an AI, and how do humans assess AI trustworthiness? What are the mechanisms for building trustworthy AI? And what is the role of interpretable ML in trust? Here, we draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework, which distinguishes human-AI trust from human-AI-human trust. Evaluating an AI's contractual trustworthiness involves predicting future model behavior using behavior certificates (BCs) that aggregate behavioral evidence from diverse sources including empirical out-of-distribution and out-of-task evaluation and theoretical proofs linking model architecture to behavior. We clarify the role of interpretability in trust with a ladder of model access. Interpretability (level 3) is not necessary or even sufficient for trust, while the ability to run a black-box model at-will (level 2) is necessary and sufficient. While interpretability can offer benefits for trust, it can also incur costs. We clarify ways interpretability can contribute to trust, while questioning the perceived centrality of interpretability to trust in popular discourse. How can we empower people with tools to evaluate trust? Instead of trying to understand how a model works, we argue for understanding how a model behaves. Instead of opening up black boxes, we should create more behavior certificates that are more correct, relevant, and understandable. We discuss how to build trusted and trustworthy AI responsibly.
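The "level 2" access the abstract argues for, running a black-box model at will, can be sketched as a toy behavior certificate: query the model on named evaluation distributions and record aggregate behavior, with no access to internals. The model, data, and certificate fields below are all hypothetical illustrations of the idea, not the paper's implementation:

```python
def blackbox_model(x: float) -> int:
    """Stand-in black-box classifier: its internals are opaque to the evaluator,
    who may only run it at will (level 2 access)."""
    return 1 if x > 0.5 else 0

def build_behavior_certificate(model, labelled_sets):
    """Aggregate empirical behavior across named evaluation distributions
    into a simple certificate: accuracy per distribution."""
    certificate = {}
    for name, examples in labelled_sets.items():
        correct = sum(model(x) == y for x, y in examples)
        certificate[name] = correct / len(examples)
    return certificate

# Hypothetical in-distribution and out-of-distribution labelled examples.
evaluations = {
    "in_distribution":     [(0.9, 1), (0.1, 0), (0.7, 1), (0.3, 0)],
    "out_of_distribution": [(5.0, 1), (-3.0, 0), (0.51, 0)],
}

cert = build_behavior_certificate(blackbox_model, evaluations)
print(cert["in_distribution"])      # perfect on familiar inputs
print(cert["out_of_distribution"])  # degrades off-distribution
```

The point of the sketch is that the certificate is built purely from observed behavior: a consumer of the model can weigh these numbers without ever opening the black box.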

How 'digital twin' AI will transform sustainable farming


This story is part of Fix's What's Next Issue, which looks ahead to the ideas and innovations that will shape the climate conversation in 2022, and asks what it means to have hope now. Check out the full issue here. Imagine you're standing at the edge of a soybean field in Iowa. In the distance, a combine harvester guided by GPS rolls across a field that has been leveled with the aid of a laser, as the farmer at the wheel monitors weather data on her phone. These tools, part of an approach to agronomy called precision agriculture, promise to increase yields and reduce costs by maximizing efficiency.