A pioneer in machine learning has argued that the technology is best placed to augment human intelligence, and has bemoaned 'confusion' over the meaning of artificial intelligence (AI). Michael I. Jordan, a professor in the department of electrical engineering and computer science and the department of statistics at the University of California, Berkeley, told the IEEE that while science-fiction discussions around AI were 'fun', they were also a 'distraction.' "There's not been enough focus on the real problem, which is building planetary-scale machine learning-based systems that actually work, deliver value to humans, and do not amplify inequities," said Jordan in an IEEE Spectrum article by Kathy Pretz. Jordan, whose awards include the IEEE John von Neumann Medal, awarded last year for his contributions to machine learning and data science, wrote an article entitled 'Artificial Intelligence: The Revolution Hasn't Happened Yet', first published in July 2019 but last updated at the start of this year. With various contributors thanked at the foot of the article, including one Jeff Bezos, Jordan outlined the rationale for caution.
We bandy about the term "artificial intelligence," evoking ideas of creative machines anticipating our every whim, though the reality is more banal: "For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations." This is from Michael I. Jordan, one of the foremost authorities on AI and machine learning, who wants us to get real about AI. "People are getting confused about the meaning of AI in discussions of technology trends: that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans. We don't have that, but people are talking as if we do," he noted in the IEEE Spectrum article. Instead, he wrote in an article for Harvard Data Science Review, we should be talking about ML and its potential to augment, not replace, human cognition. Jordan calls this "Intelligence Augmentation," and uses examples like search engines to showcase the possibilities for assisting humans with creative thought.
I work on HR systems, so I deal with HR a lot and hear a lot of stories. They hate dealing with employees as much as employees hate dealing with them. There's a lot more behind-the-scenes work they'd rather be doing. The only people I hear bitching about HR in the workplace are 1) the ones HR has to handhold through forms (government- or benefits-vendor-required; forms aren't for fun, ever) or 2) the ones constantly causing issues in the workplace (and it's never the employee's fault, "everyone else is the issue"). I'm sure I'll get downvoted because there are a lot of those #2s on Reddit.
The U.S. Army endured a chilly reception when it launched a streaming effort around its esports team. Jordan Uhl, an activist who handles Twitch streaming for the progressive advocacy group MoveOn, made headlines when he was banned from the Army's Twitch channel after posting a question in the chat asking viewers about their favorite U.S. war crime. The Army's Twitch channel was then accused of violating Uhl's First Amendment right to political speech, while the Army claimed Uhl had broken Twitch Community Guidelines by harassing them. Afterward, Ocasio-Cortez called for banning the military from recruiting on Twitch; her proposal was voted down by a bipartisan majority.
The combination of human and machine learning, wherever they complement one another, has a lot of potential applications in citizen science. Several projects have already integrated both forms of learning to perform data-centred tasks (Willi et al. 2019; Sullivan et al. 2018). While the term artificial intelligence (AI) is generally used to refer to any kind of machine or algorithm able to observe the environment, learn, and make decisions, the term machine learning (ML) has been defined 'as a subfield of artificial intelligence that includes software able to recognize patterns, make predictions, and apply newly discovered patterns to situations that were not included or covered by their initial design' (Popenici and Kerr 2017, p. 2). ML algorithms are currently the most widely used, with applications in image and speech recognition, fraud detection, and reproducing human abilities such as playing Go or driving cars. In scientific research, they find many applications in fields such as biology, astronomy, and the social sciences, to mention just a few (Jordan and Mitchell 2015).
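That definition — learning patterns from labelled examples and applying them to inputs the system was never explicitly programmed for — can be illustrated with a toy example. The sketch below (not drawn from any of the cited papers; the data and labels are invented for illustration) implements a minimal 1-nearest-neighbour classifier in plain Python:

```python
def predict(train, query):
    """Return the label of the training point closest to `query`.

    train: list of ((x, y), label) pairs; query: an (x, y) point.
    The classifier "learns" only by storing examples, yet it can
    label points that never appeared in its training data.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    _, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Invented toy data: two clusters of labelled points.
train = [((0.1, 0.2), "cat"), ((0.2, 0.1), "cat"),
         ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog")]

print(predict(train, (0.15, 0.15)))  # a point not in the training set
```

Real ML systems replace the stored-examples "model" with learned parameters, but the core behaviour is the same: generalizing from observed patterns to unseen situations.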
"With Astra we wanted to make a controller that was thinking about the whole map," said Jordan "Riot Wrekz" Anton, a designer at Riot Games. "Her global presence was there right from the beginning. From there, the fine-tuning was in finding the right abilities to balance predicting enemies' actions and reacting to changing game circumstances."
Nonconvex optimization has been widely adopted in various domains, including image recognition (Hinton et al., 2012; Krizhevsky et al., 2012), Bayesian graphical models (Jordan et al., 2004; Attias, 2000), recommendation systems (Salakhutdinov et al., 2007), etc. Although solving a nonconvex problem is generally difficult, empirical evidence has shown that simple first-order algorithms, such as stochastic gradient descent (SGD), are able to solve a majority of the aforementioned nonconvex problems efficiently. The theory behind these empirical observations, however, is still largely unexplored. In the classical optimization literature, there have been fruitful results characterizing the convergence of SGD to first-order stationary points for nonconvex problems. However, these results fall short of explaining the empirical evidence that SGD often converges to global minima for a wide class of nonconvex problems used in practice. More recently, understanding the role of noise in the algorithmic behavior of SGD has received significant attention. For instance, Jin et al. (2017) show that a perturbed form of gradient descent is able to escape from strict saddle points and converge to second-order stationary points (i.e., local minima). Zhou et al. (2019) further show that noise in the update can help SGD escape from spurious local minima and converge to global minima.
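The perturbation idea can be sketched concretely. The Python snippet below is a toy illustration only, not the algorithm of Jin et al. (2017), whose perturbation schedule and step sizes are tuned with provable guarantees: whenever the gradient is near zero (which happens at both minima and saddles), we inject a small random perturbation, so that plain gradient descent started exactly at a strict saddle point escapes it instead of staying stuck. The objective f(x1, x2) = (x1^2 - 1)^2 + x2^2, with a strict saddle at (0, 0) and minima at (±1, 0), is an invented example:

```python
import math
import random

def grad(x):
    # Gradient of f(x1, x2) = (x1^2 - 1)^2 + x2^2.
    # f has a strict saddle at (0, 0) and global minima at (+/-1, 0).
    return [4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]]

def perturbed_gd(x, lr=0.05, radius=0.01, grad_tol=1e-3, steps=2000, seed=0):
    """Gradient descent with random perturbations at near-stationary points."""
    rng = random.Random(seed)
    for _ in range(steps):
        g = grad(x)
        if math.hypot(g[0], g[1]) < grad_tol:
            # Tiny gradient: we may be at a minimum or a strict saddle.
            # A small random kick lets descent escape strict saddles,
            # since any nonzero component along the escape direction grows.
            x = [xi + rng.uniform(-radius, radius) for xi in x]
        else:
            x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x = perturbed_gd([0.0, 0.0])  # start exactly at the saddle point
```

Started at the saddle (0, 0), the plain update x ← x − lr·grad(x) would never move; with the perturbation it ends near one of the minima (±1, 0), up to jitter of order `radius`.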
Ahead of Sunday's match between the Kansas City Chiefs and Tampa Bay Buccaneers, you can watch the ad Amazon will air during the Super Bowl. Titled "Alexa's Body," it features an Amazon employee and the company's new $100 Echo model. Oh, and Black Panther star Michael B. Jordan makes an appearance too. The ad starts with the fictional employee praising the design of Amazon's latest smart speaker. "I literally couldn't imagine a more beautiful vessel for Alexa to be... inside," they say of the 2020 Echo, their train of thought drifting off as a bus pulls up outside, its side plastered with an ad for Jordan's new Prime Video series, Without Remorse.
Michael B. Jordan is many things. He's an actor, film producer, and director, to name a few, but more recently, he's a beautiful vessel for Amazon's voice assistant, Alexa. Amazon pulled out the big guns for the big game and landed Jordan to star in "Alexa's Body," a minute-long Super Bowl ad that imagines the actor as an ideal vessel for the AI technology. I mean, take a minute to really think about how enjoyable asking Alexa things like "how many tablespoons are in a cup," "turn on the sprinklers," "dim the lights," "add bath oil to my shopping list," or "read my sensual audiobook to me while I take a candle-lit bath" would be if Alexa looked like Jordan instead of a sad little piece of tech sitting on your countertop. There's plenty of controversy around Amazon and its smart devices, but we can't deny that this is a good ad.
We already know deepfakes are all over the place these days, and the technology associated with them is advancing rapidly. But how easy is it to create a digital replica of somebody? And could it be done on a budget? That's the question YouTuber Tom Scott set out to answer in his latest video, for which he challenged AI and neuroscience researcher Jordan Harrod to create a fake version of him for $100. "This isn't a face replacement or a body double," Scott explains at the start of the video.