Companies are increasingly using algorithms to manage and control individuals not by force, but by nudging them into desirable behavior -- in other words, learning from their personalized data and altering their choices in subtle ways. Since the Cambridge Analytica scandal broke in 2018, for example, it has been widely known that the flood of targeted advertising and highly personalized content on Facebook may not only nudge users into buying more products, but also coax and manipulate them into voting for particular political parties. University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein popularized the term "nudge" in 2008, but thanks to recent advances in AI and machine learning, algorithmic nudging is far more powerful than its non-algorithmic counterpart. With so much data about workers' behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals' decisions and behaviors at large scale. These algorithms can be adjusted in real time, making the approach even more effective.
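The real-time adjustment described above is often implemented with multi-armed bandit algorithms, which learn from each user response without any retraining pass. Below is a minimal epsilon-greedy sketch of that idea -- the variant names and reward signal are invented for illustration, not taken from any company's actual system.

```python
import random

class NudgeBandit:
    """Toy epsilon-greedy bandit: picks which 'nudge' variant to show
    and updates its reward estimates from each observed response."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.values = {v: 0.0 for v in variants}  # running mean reward

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the
        # variant with the best estimated reward so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, variant, reward):
        # Incremental mean update -- cheap enough to run on every
        # interaction, which is what makes real-time tuning feasible.
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

bandit = NudgeBandit(["scarcity_banner", "social_proof", "discount"])
variant = bandit.choose()          # variant shown to the user
bandit.update(variant, reward=1)   # 1 = user clicked / converted
```

Each interaction shifts the estimates slightly, so the "nudge" served to the next user already reflects everything observed so far.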
Regulators in Europe and Washington are racing to figure out how to govern businesses' use of artificial intelligence while companies push to deploy the technology. Driving the news: On Wednesday, the EU revealed a detailed proposal on how AI should be regulated, banning some uses outright and defining which uses of AI are deemed "high-risk." In the U.S., the federal government has yet to pass legislation specifically addressing AI, though some local governments have enacted their own rules, especially around facial recognition. Acting FTC chairwoman Rebecca Slaughter told Axios: "I am pleased that the European Commission shares the FTC's concerns about the risks posed by artificial intelligence... I look forward to reviewing the EC's proposal as we learn from each other in pursuit of transparency, fairness, and accountability in algorithmic decision making."
In 1969, as revolutionary fires burned, the Academy gave its Best Picture award to "Oliver!" Hollywood, still ruled by the crumbling studio system, was almost willfully blind to the nineteen-sixties; even breakthrough films such as "2001: A Space Odyssey" and "Rosemary's Baby" were left off the Best Picture list, which included representatives of such superannuated genres as the big-budget musical ("Funny Girl") and the medieval costume drama ("The Lion in Winter"). Under the newly devised rating system, "Oliver!" became the first G-rated film to win Best Picture, and it remains the last. By the next year, movies like "Midnight Cowboy" and "Easy Rider" finally injected the ceremony with a dose of sixties counterculture--but the decade was over. Two of this year's eight Best Picture nominees are set largely in 1969, and they show what Hollywood wouldn't bring itself to see back then. "The Trial of the Chicago 7" dramatizes the politicized court proceedings against activists who, the year before, protested the Democratic National Convention in Chicago.
A narrowly avoided collision between Elon Musk's SpaceX and OneWeb satellites that was widely reported last week did not take place, according to filings provided to the FCC by SpaceX. It was reported that SpaceX's satellite came within 60 meters of a OneWeb craft, but SpaceX claims that the actual miss distance was over 1,000 meters, which was "neither a 'close call' [nor] 'urgent'". OneWeb's satellites operate at a 1,200-kilometer altitude, compared to SpaceX's 550 kilometers, meaning they must pass through Mr Musk's network as they ascend. OneWeb apparently contacted the SpaceX team, who disabled the Starlink satellite's collision avoidance system to allow OneWeb to pass through, according to OneWeb's government affairs chief Chris McLaughlin. However, SpaceX claims this is not the case, stating in FCC filings authored by the company's director of satellite policy, David Goldman, that "the probability of collision never exceeded the threshold for a maneuver, and the satellites would not have collided even if no maneuver had been conducted".
Originally published at Ross Dawson. Shortly after the new year 2021, the Media Synthesis community on Reddit became more than usually psychedelic. The board became saturated with unearthly images depicting rivers of blood, Picasso's King Kong, a Pikachu chasing Mark Zuckerberg, Synthwave witches, acid-induced kittens, an inter-dimensional portal, the industrial revolution and the possible child of Barack Obama and Donald Trump. The bizarre images were generated by inputting short phrases into Google Colab notebooks (web pages from which a user can access the formidable machine learning resources of the search giant) and letting the trained algorithms compute possible images based on that text. In most cases, the optimal results were obtained in minutes. Various attempts at the same phrase would usually produce wildly different results. In the image synthesis field, this free-ranging facility of invention is something new; not just a bridge between the text and image domains, but an early look at comprehensive AI-driven image generation systems that don't need hyper-specific training in very limited domains (e.g. NVIDIA's landscape generation framework GauGAN [on which, more later], which can turn sketches into landscapes, but only into landscapes; or the various sketch-to-face Pix2Pix projects, which are likewise 'specialized'). Example images generated with the Big Sleep Colab notebook.
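Under the hood, notebooks like Big Sleep pair a text-image similarity model (OpenAI's CLIP) with an image generator (BigGAN), repeatedly nudging the generator's latent input so the resulting image scores higher against the input phrase. The toy sketch below mimics that optimization loop with a stand-in `similarity` function in place of the real neural networks -- an illustration of the search procedure, not the actual notebook code:

```python
import random

def similarity(latent, phrase):
    # Stand-in for CLIP: scores how well an image made from `latent`
    # would match `phrase`. Here it's just a toy target-matching
    # function whose maximum is 0.
    target = [len(phrase) % 7 / 10.0] * len(latent)
    return -sum((a - b) ** 2 for a, b in zip(latent, target))

def generate(phrase, dims=8, steps=200, step_size=0.05, seed=0):
    """Hill-climb a random latent vector toward a higher score.
    Different seeds start from different latents, which is one reason
    repeated runs on the same phrase diverge so wildly."""
    rng = random.Random(seed)
    latent = [rng.uniform(-1, 1) for _ in range(dims)]
    best = similarity(latent, phrase)
    for _ in range(steps):
        # Propose a small random perturbation; keep it only if the
        # similarity score improves.
        candidate = [x + rng.gauss(0, step_size) for x in latent]
        score = similarity(candidate, phrase)
        if score > best:
            latent, best = candidate, score
    return latent, best

latent, score = generate("a Pikachu chasing Mark Zuckerberg")
```

The real systems use gradient descent through the networks rather than random hill-climbing, but the shape of the loop -- score, perturb, keep improvements -- is the same.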
"Breaking Bots" by Clinc's Founder CEO Jason Mars is released with ForbesBooks. This release is posted on behalf of ForbesBooks (operated by Advantage Media Group under license.) NEW YORK (March 16, 2021) -- Breaking Bots: Inventing a New Voice in the AI Revolution by Clinc's Founder CEO Dr. Jason Mars is available now. The book is published with ForbesBooks, the exclusive business book publishing imprint of Forbes. In setting the stage for his new book, Jason Mars considers how technology has shaped the arc of human history, time and again.
The US Federal Trade Commission has warned companies against using biased artificial intelligence, saying they may break consumer protection laws. A new blog post notes that AI tools can reflect "troubling" racial and gender biases. If those tools are applied in areas like housing or employment, falsely advertised as unbiased, or trained on data that is gathered deceptively, the agency says it could intervene. "In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver," writes FTC attorney Elisa Jillson -- particularly when promising decisions that don't reflect racial or gender bias. "The result may be deception, discrimination -- and an FTC law enforcement action."
You're probably reading this on a browser built by Apple or Google. If you're on a smartphone, it's almost certain those two companies built the operating system. You probably arrived from a link posted on Apple News, Google News or a social media site like Facebook. And when this page loaded, it, like many others on the Internet, connected to one of Amazon's ubiquitous data centers. Amazon, Apple, Facebook and Google -- known as the Big 4 -- now dominate many facets of our lives. But they didn't get there alone. They acquired hundreds of companies over decades to propel them to become some of the most powerful tech behemoths in the world.
Artificial intelligence companies are developing audio transcription tools that can create searchable archives of calls and meetings, WIRED reported April 15. AI companies have greatly improved their automated audio transcription in recent years, and the technology can now produce transcripts with impressive accuracy, according to WIRED. One example is Stedi, a company that makes business-to-business software. It developed a tool called Rewatch that records meetings and uses voice-dictation AI to transcribe them, providing employees with a searchable record of everything said during the meeting. AI companies Otter.ai and Trint also offer voice dictation to produce meeting transcripts, and Zoom has a built-in feature that offers meeting notes.
Teamwork isn't just a human characteristic: colonies of army ants will form living 'scaffolding' to protect members from falling. The insects are blind and have no designated leader but, according to new research, they are able to use simple behavioral rules to build these safety structures without the need for direct communication. Once a scaffold was built, worker ants were almost 100 percent protected from falling off steep inclines. Understanding how the ants design such complex structures could help engineers develop self-healing materials and swarm robotics, researchers said. Army ants in Central American rainforests build scaffolds out of their bodies to help them traverse steep terrain.
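The "simple behavioral rules" amount to a local feedback loop: an ant that slips on the incline stops and anchors itself, and each anchored ant makes slipping less likely for those that follow, so the scaffold grows only as large as the slope demands. The toy agent-based sketch below illustrates that idea -- the probabilities are invented for illustration and are not taken from the study:

```python
import random

def simulate_scaffold(incline_deg, n_ants=500, seed=1):
    """Toy model of scaffold formation: each ant crossing the incline
    slips with a probability that rises with steepness and falls as
    more ants have already frozen in place as 'scaffold'. A slipping
    ant stops and joins the scaffold -- a purely local rule with no
    communication between ants."""
    rng = random.Random(seed)
    scaffold = 0
    for _ in range(n_ants):
        # Slip chance grows with incline, shrinks as the scaffold
        # fills in; it bottoms out at zero once the slope is covered.
        p_slip = max(0.0, incline_deg / 90.0 - 0.02 * scaffold)
        if rng.random() < p_slip:
            scaffold += 1  # the slipping ant anchors itself
    return scaffold

# Steeper inclines recruit larger scaffolds, with no coordination.
gentle = simulate_scaffold(20)
steep = simulate_scaffold(70)
```

Because the slip probability self-limits, the structure stops growing once crossing ants no longer fall -- mirroring the finding that a finished scaffold protected workers almost completely.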