

Texas man arrested for allegedly flying drugs, phones into prison yard on drone

FOX News

A Texas man was arrested after allegedly flying a drone loaded with drugs, prepaid phones, and MP3 players into a Fort Worth prison yard. Bryant LeRay Henderson, 42, was arrested at his home in Smithville, Texas, and charged with one count of attempting to provide contraband in prison, one count of serving as an airman without an airman's certificate, and one count of possession with intent to distribute a controlled substance. "Contraband drone deliveries are quickly becoming the bane of prison officials' existence. Illicit goods pose a threat to guards and inmates alike – and when it comes to cell phones, the threat often extends outside prison walls. We are determined to stop this trend in its tracks," said U.S. Attorney Chad Meacham in a press release on Friday.

Last Week in AI #177: OpenAI commercializes DALL-E 2, Sony AI beats human competitors in racing game, Gmail getting smarter searches, and more!


Last week OpenAI moved DALL-E 2, the image generation tool, into beta (the company hopes to expand its current user base to 1 million) while granting users "the right to reprint, sell, and merchandise" images they generate with DALL-E. This is useful for users who wish to use the generated images for commercial purposes, such as making illustrations for children's books. Other openly available AI image generation models face similar intellectual property questions. It is also not clear whether OpenAI violated any IP laws by training on these Internet images and then commercializing its model. While the UK is exploring allowing commercial use of models trained on public but copyrighted data, the U.S. may not follow suit.

Argo AI assembles panel of outside experts to oversee safety of its autonomous vehicles


As autonomous vehicle testing ramps up, Argo AI announced the formation of a panel of outside experts to oversee the safe deployment of its technology. The panel will "provide feedback on Argo's safety and security practices and policies, including maintaining a world-class safety culture, scaling safely across multiple cities and countries, and responsibly launching and operating commercial driverless services," said the startup, which is backed by Ford and Volkswagen. The announcement comes as public opinion seems to be turning against autonomous vehicles (AVs), with recent surveys suggesting nearly half of Americans think AVs would be a "bad idea" for society. It also comes as the Biden administration continues to scrutinize crashes involving autonomous vehicles as it weighs new regulations for the industry. The Argo Safety Advisory Council aims to improve the public's perception of AVs while also bringing more transparency to the work that goes on behind the scenes.

Autonomous vehicles are a dream for drug smugglers


Level 5 autonomous vehicles (AVs) will let shipping companies significantly reduce their costs and their liability, since they will no longer need long-haul truck drivers. However, the same applies to smugglers: trafficking of drugs, people, weapons, and black-market goods will all get a boost from AVs. The advent of fully autonomous technology will bring a massive increase in vehicles on the road. Long-haul delivery vehicles could largely operate with no one inside, and will likely not even have a place for a passenger.

Who Is Liable When AI Kills?


Who is responsible when AI harms someone? A California jury may soon have to decide. In December 2019, a driver using a Tesla with an artificial intelligence driving system killed two people in an accident in Gardena. The driver faces several years in prison. In light of this and other incidents, both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has recently broadened its probe to explore how drivers interact with Tesla systems.

Three opportunities of Digital Transformation: AI, IoT and Blockchain


Koomey's law: This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, at the energy efficiency of computation of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications. Metcalfe's law: This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows proportionally to the square of the number of its nodes (see Figure 1–8).
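The compounding behind both laws can be sketched numerically. This is a minimal illustration, not from the text itself: the function names are hypothetical, the 1.5-year doubling period is the approximate figure cited above, and Metcalfe's "value" is modeled here as the number of possible pairwise connections, n(n-1)/2.

```python
def koomey_energy_factor(years, doubling_period=1.5):
    """Fraction of today's energy needed for the same computation
    after `years` years, assuming efficiency doubles every
    `doubling_period` years (Koomey's law)."""
    return 0.5 ** (years / doubling_period)

def metcalfe_value(n_nodes):
    """Relative network value under Metcalfe's law, modeled as the
    number of possible pairwise connections: n * (n - 1) / 2."""
    return n_nodes * (n_nodes - 1) / 2

# Thirty years of Koomey's law: the same computation needs only a
# tiny fraction of the original energy.
print(f"Energy factor after 30 years: {koomey_energy_factor(30):.2e}")

# Metcalfe's law: doubling the nodes roughly quadruples the value.
print(f"Value ratio, 100 vs. 50 nodes: {metcalfe_value(100) / metcalfe_value(50):.2f}")
```

Doubling a network from 50 to 100 nodes raises the pairwise-connection count by a factor slightly above 4, which is the quadratic growth the law describes.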

Why AI Needs a Social License


If business wants to use AI at scale, adhering to the technical guidelines for responsible AI development isn't enough. It must obtain society's explicit approval to deploy the technology. Six years ago, in March 2016, Microsoft Corporation launched an experimental AI-based chatbot, TayTweets, whose Twitter handle was @TayandYou. Tay, an acronym for "thinking about you," mimicked a 19-year-old American girl online, so the digital giant could showcase the speed at which AI can learn when it interacts with human beings. Living up to its description as "AI with zero chill," Tay started off replying cheekily to Twitter users and turning photographs into memes. Some topics were off limits, though; Microsoft had trained Tay not to comment on societal issues such as Black Lives Matter. Soon enough, a group of Twitter users targeted Tay with a barrage of tweets about controversial issues such as the Holocaust and Gamergate. They goaded the chatbot into replying with racist and sexually charged responses, exploiting its repeat-after-me capability. Realizing that Tay was reacting like IBM's Watson, which started using profanity after perusing the online Urban Dictionary, Microsoft was quick to delete the first inflammatory tweets. Less than 16 hours and more than 100,000 tweets later, the digital giant shut down Tay.

Gatik is bringing its self-driving box trucks to Kansas


Autonomous vehicle startup Gatik says it will start using its self-driving box trucks in Kansas as it expands to more territories. Governor Laura Kelly last week signed a bill that makes it legal for self-driving vehicles to run on public roads under certain circumstances. Following a similar effort in Arkansas, Gatik says it and its partner Walmart worked with legislators and stakeholders to "develop and propose legislation that prioritizes the safe and structured introduction of autonomous vehicles in the state." Before Gatik's trucks hit Kansas roads, the company says it will provide training to first responders and law enforcement. Gatik claims that, since it started commercial operations three years ago, it has maintained a clean safety record in Arkansas, Texas, Louisiana and Ontario, Canada.

Engaging with Disengagement


Disengagement occurs when the vehicle returns to manual control or the driver feels the need to take the wheel back from the AV decision system. I came across a news article a while ago about a man dozing off at the wheel after switching his Tesla to autonomous mode, and being criminally charged soon after because the vehicle was speeding unbeknownst to him. A quick search revealed several such reports of drivers being charged for unlawful practices in semi-autonomous vehicles. This got me thinking: how will traffic laws change as we slowly enter the autonomous vehicle era and, more generally, the AI-driven 21st century? Most importantly, this raises the question of whom to blame in adverse human-robot interactions. These aren't new questions, only questions to which new perspectives can continually be added until a final course of action is decided. While I actively try to avoid the philosophical and ethical underpinnings of the matter, I will cover the current progress in autonomous vehicle technology, trends and limitations of today's autonomous vehicle policy, and possible directions to better facilitate the transition to autonomous vehicles around the globe. The last decade or so has been a very exciting time in the self-driving vehicle space.

AI Startups Finally Getting Onboard With AI Ethics And Loving It, Including Those Newbie Autonomous Self-Driving Car Tech Firms Too


AI startups are increasingly embracing AI ethics, though this is trickier than it might seem at first glance. Whatever you are thinking, think bigger. Fake it until you make it. These are the typical startup lines that you hear or see all the time. They have become a kind of advisory lore among budding entrepreneurs. If you wander around Silicon Valley, you'll probably see bumper stickers with those slogans and likely witness high-tech founders wearing hoodies emblazoned with such tropes. AI-related startups are assuredly included in the bunch. Perhaps, though, we might add an additional piece of startup success advice for nascent AI-focused firms: they should energetically embrace AI ethics. That is a bumper-sticker-worthy notion and a useful piece of sage wisdom for any AI founder trying to figure out how to be a proper leader and a winning entrepreneur. The first impulse of many AI startups is likely the exact opposite of wanting to embrace AI ethics. Often, the focus of an AI startup is primarily on getting some tangible AI system out the door as quickly as possible. There is usually tremendous pressure to produce an MVP (minimum viable product). Investors are skittish about putting money into some newfangled AI contrivance that might not be buildable, and therefore the urgency to craft an AI pilot or prototype is paramount.