Being a board member is a hard job -- ask anyone who has ever been one. Company directors have to understand the nature of the business, review documents, engage in meaningful conversation with CEOs, and give feedback while still maintaining positive relationships with management. These are all hard things to balance. But, normally, boards don't have to get involved with individual operational projects, especially technical ones. In fact, a majority of boards have very few members who are comfortable with advanced technology, and this generally has little impact on the company.
"You want to ride the wave rather than getting slammed by its disruption. You don't want to be Blockbuster Video or Sears; you want to be Netflix or Amazon." That was how Dave Bluey, assistant professor of practice and career advisor with the Department of Management, explained the reasoning behind the department's symposium, "How Artificial Intelligence Will Impact Your Career." Over 250 students gathered for a panel discussion led by industry experts to hear about – and in some cases see – the impact artificial intelligence may have on their future careers. The event was a partnership between the Management Department and leading firms in machine learning, artificial intelligence, and robotics, part of an ongoing Digital Transformation Series at Virginia Tech.
Sometime last fall, a man walked into the Nike store in Pasadena, California. He was a runner, it was a running-centric store, and he was there to buy a pair of running shoes just like the ones he had worn in the past. The clerk asked him if he'd be willing to have his feet measured a new way. "I'm a 9," the runner said. "I've always been a 9. Just give me a 9." Still, he relented.
Many agencies are still in the early stages of collecting data and learning how to use it to drive value through AI automation. Yet a 2018 study found 49 percent of those surveyed agreed that artificial intelligence (AI) and automation will change the way we work, and 31 percent believed they had already seen the benefits. But organizations are also dealing with the fear that AI and automation could cause job losses across many sectors. Technology advances will undoubtedly alter current roles, as well as create new ones.
A new startup has created an artificial intelligence system capable of mimicking voices to an unprecedented degree of realism. In a video from Dessa, an AI company staffed by former employees of Google, IBM, and Microsoft, multiple audio clips demonstrate machine-learning software that parrots the voice of popular podcaster Joe Rogan to a degree that's almost indiscernible from the real thing. In the clips, the computer-generated Rogan muses on topics like chimpanzees that play hockey; it pulls off some adept tongue-twisters; and it even expounds on theories about how we're all living in a simulation, which, as noted by The Verge, are some of Rogan's favorite topics. Joe Rogan is one of the most popular podcasters in the world, giving the AI plenty of data to draw from when mimicking the host's voice. Rogan himself called the demonstration 'terrifyingly accurate,' CNET reports. What makes the demonstration more intriguing, or perhaps scarier, according to Dessa, is that software like the one demonstrated could soon be commonplace.
Say hello to Joe Rogan: podcaster, entertainer of problematic views, and a man who believes that feeding his all-chimp hockey team a diet of bone broth and elk meat will give them the power to rip your balls off. Or at least, that's what an unsuspecting listener might believe after hearing an entirely AI-generated clip of the popular podcaster. Unlike Rogan's typical, totally coherent rants, this one is a complete fabrication. "The replica of Rogan's voice the team created was produced using a text-to-speech deep learning system they developed called RealTalk," explained the researchers behind the clip in a blog post, "which generates life-like speech using only text inputs." This obviously calls to mind deepfakes, the video-editing technology that can convincingly make it look like people did or said things they in fact did not.
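The quoted description of RealTalk, generating lifelike speech from text alone, can be illustrated with a deliberately crude sketch. Everything here (the vowel-to-pitch table, sample rate, and durations) is a hypothetical simplification for illustration; the real system is a deep neural network trained on hours of the speaker's audio, not a lookup table.

```python
# Toy sketch of the text-to-waveform idea behind text-to-speech systems.
# Hypothetical simplification: one short sine tone per vowel. A real TTS
# model learns a speaker's timbre and prosody from training data.
import math

# Hypothetical vowel-to-pitch table (Hz); invented for this sketch.
PITCH = {"a": 220.0, "e": 262.0, "i": 294.0, "o": 330.0, "u": 349.0}

SAMPLE_RATE = 8000       # samples per second
PHONEME_SECONDS = 0.05   # duration of each toy "phoneme"

def synthesize(text: str) -> list[float]:
    """Turn text into a crude waveform: one short tone per vowel."""
    samples: list[float] = []
    n = int(SAMPLE_RATE * PHONEME_SECONDS)  # samples per tone
    for ch in text.lower():
        freq = PITCH.get(ch)
        if freq is None:
            continue  # this toy version skips consonants and spaces
        samples.extend(
            math.sin(2 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)
        )
    return samples

wave = synthesize("audio")
```

The gap between this sketch and RealTalk's near-indistinguishable output is exactly the part the deep learning supplies: mapping text to natural pitch, timing, and voice character rather than fixed tones.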
Organizers of the 2020 Tokyo Olympic and Paralympic Games announced plans Wednesday to launch robots from the "Mobile Suit Gundam" anime series into space aboard a satellite that will broadcast messages of support to athletes. In the project, conducted in collaboration with the Japan Aerospace Exploration Agency and the University of Tokyo, two 10-centimeter models depicting Gundam and Char's Zaku robots from the animation series will be sent into orbit on a 30-cm long, 10-cm wide microsatellite. The "G Satellite," with an electronic bulletin board for displaying messages, will be sent to the International Space Station aboard a supply ship next March and later launched from the ISS. After the satellite enters the Earth's orbit, it will deploy the robots and the bulletin board. The organizers of the project will then share images taken with an onboard camera, including congratulatory messages in multiple languages, with athletes through social media and other outlets.
But they'll have nothing on Rikard Grönborg, the head coach of Sweden's national hockey team, who will log over 400 consecutive hours during the Ice Hockey World Championships in Slovakia … sort of. Using advanced 3D and voice technology, agency Perfect Fools created a virtual Grönborg to report live on YouTube 24 hours a day through the duration of the tournament. The coach spent hours in front of the camera so the virtual anchor could learn his voice and mannerisms. Additionally, 20 years of hockey data was analyzed so the fake Grönborg could make predictions for all of the tournament's games. The end product does look and sound a little robotic, but in all fairness, it's an ambitious project and the technology is still somewhat nascent.
This summer, furniture company Kartell will start selling a new plastic chair designed by Philippe Starck – with some help from artificial intelligence. The system used – not, perhaps, strictly an AI – was a generative design software platform from Autodesk. Supplied with initial design goals, along with parameters such as materials, manufacturing methods and cost constraints, the software explores the possible permutations of a solution to generate design alternatives, testing each iteration and learning what works and what doesn't. "As the relationship between the two matured, the system became a much stronger collaborative partner, and began to anticipate Starck's preferences and the way he likes to work," says Mark Davis, senior director of design futures at Autodesk.
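The generate-test-keep loop described above can be sketched in miniature. This is a hypothetical illustration using random search over two made-up parameters (weight and cost); Autodesk's generative design platform explores real geometry, materials, and manufacturing rules, and learns between iterations rather than sampling blindly.

```python
# Minimal sketch of a generative-design loop: generate candidates within
# parameter ranges, reject infeasible ones, keep the best scorer.
# The objective, constraints, and parameter ranges are all invented
# for illustration, not Autodesk's actual model.
import random

def score(design: dict) -> float:
    """Hypothetical design goal: reward low weight and low cost."""
    return -(design["weight_kg"] + 0.5 * design["cost_usd"])

def feasible(design: dict, max_cost: float) -> bool:
    """Constraint check standing in for material/manufacturing limits."""
    return design["cost_usd"] <= max_cost

def generate(rng: random.Random) -> dict:
    """Sample one candidate design from the allowed parameter space."""
    return {
        "weight_kg": rng.uniform(2.0, 6.0),
        "cost_usd": rng.uniform(10.0, 50.0),
    }

def explore(iterations: int, max_cost: float, seed: int = 0) -> dict:
    """Generate candidates, test each, and keep the best feasible one."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        candidate = generate(rng)
        if not feasible(candidate, max_cost):
            continue
        if best is None or score(candidate) > score(best):
            best = candidate
    return best

chair = explore(iterations=1000, max_cost=30.0)
```

A designer like Starck then curates the surviving candidates, which is the collaborative dynamic Davis describes.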
A MailOnline investigation into how much personal information Alexa records and stores on its users has revealed that the smart assistant eavesdrops on housemates' gossip, private conversations about insurance policies - and even the family dog. Amazon insists Alexa can only be activated when the allocated 'wake word' is uttered - Alexa, Computer or Echo. The tech giant - along with Apple's Siri and, until recently, Google's Assistant - says it saves every interaction a person has with the device to improve the service, with some 'unintentional' snippets also being recorded if it mistakes another noise for a wake word. However, evidence seen by MailOnline suggests this cannot be the case, or the process is fundamentally flawed: a host of sounds and conversations were recorded without a clear or legitimate wake word being uttered - some when there was not even a human nearby. The investigation into these 'secret' archives revealed eerie snippets of users' friends, families and children being recorded while they were completely unaware.
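The "mistaken wake word" failure mode described above is easy to see in miniature: detectors score incoming audio against the wake words and trigger when the score crosses a threshold, so similar-sounding input can cross it too. The scoring below is a hypothetical stand-in using string similarity on transcribed phrases; real devices run neural keyword-spotting models directly on audio.

```python
# Toy illustration of wake-word false triggers: anything scoring above
# a sensitivity threshold activates recording. SequenceMatcher on text
# is a hypothetical stand-in for an acoustic keyword-spotting model.
from difflib import SequenceMatcher

WAKE_WORDS = ("alexa", "computer", "echo")
THRESHOLD = 0.7  # hypothetical sensitivity setting

def wake_score(heard: str) -> float:
    """Similarity of the heard phrase to the closest wake word."""
    return max(
        SequenceMatcher(None, heard.lower(), w).ratio() for w in WAKE_WORDS
    )

def triggers(heard: str) -> bool:
    """Would this phrase activate the device (and start recording)?"""
    return wake_score(heard) >= THRESHOLD
```

The trade-off is inherent: raise the threshold and the device misses real commands; lower it and it records sounds that merely resemble a wake word.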