In recent years there has been growing interest in investing in HR technology. A study carried out by CB Insights (2016) revealed that over $1.96 billion had been invested in start-ups dealing exclusively with HR tech. However, developments in technology require continuous workplace changes. Automation and artificial intelligence are among the technologies that allow companies to become more efficient, higher-performing and cost-effective. While some worry about people losing their jobs to "superior" robots, others are optimistic that with technology we can all achieve greater things.
In response to the coronavirus health crisis, USC researchers have made a hard pivot, adapting labs and lessons learned from treating other diseases to help check the virus and save lives. At their disposal are numerous technologies that give humans an advantage, despite how quickly COVID-19 spread across the globe once it exited central China. The disease has afflicted thousands of Californians and poses a serious risk to public health and the world economy. Tools such as supercomputers, software apps, virtual reality, big data and algorithms are now in play. Researchers are using them to find ways to seek out and destroy the coronavirus's genetic material, turn smartphones into personal protection devices, and deploy people-friendly simulators to help cope with the crush of medical cases.
In comics, television and film, there is almost no hiding from Superman because of his powerful X-ray vision. The famous exception is his inability to see through lead. Nearly 82 years since the superhero first appeared in Action Comics #1 in April 1938, the line between science fiction and reality is blurring fast in China, where advances in artificial intelligence (AI) technology are being used to help stop the coronavirus from spreading. Roving security staff at Hongyuan Park, part of the Xixi Wetland preserve in Hangzhou in eastern China, can now quickly detect the body temperature of park visitors from a distance of up to 1 meter, thanks to "non-contact thermal augmented reality" smart glasses supplied by AI start-up Rokid Corp. The company said on Thursday that each wearer of the smart glasses will be able to check the temperatures of several hundred people within two minutes – a coverage and speed that would make even Superman proud – eliminating queues at the park entrance.
Facebook has announced DeepFovea, a new state-of-the-art AI-based foveated rendering system. Engineers at Facebook Reality Labs have built an imagery assistant that generates a "plausible peripheral image" rather than the actual peripheral imagery, which in reality is hazy and unfocused because the gaze is directed elsewhere. The approach, called foveated reconstruction, compresses the pixels of an RGB (red, green, blue) video by as much as 14 times without compromising quality, producing output that is realistic and gaze-contingent. DeepFovea is one of the first generative adversarial networks (GANs) able to produce natural video sequences, say the Facebook developers of the technology. "DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye," according to Facebook.
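The core idea behind foveated rendering – render densely only where the eye is looking and let a reconstruction network fill in a plausible periphery – can be illustrated on the sampling side. The sketch below is a minimal illustration with hypothetical parameters, not Facebook's actual pipeline: it builds a mask whose sample density falls off with distance from the gaze point. DeepFovea's reported 10-14x savings come from pairing such sparse peripheral sampling with a GAN reconstructor, which is omitted here.

```python
import numpy as np

def sample_mask(height, width, gaze, min_density=0.05, fovea_radius=40, seed=0):
    """Boolean mask: True where a pixel is actually rendered.

    Density is 1.0 within `fovea_radius` of the gaze point and falls
    off quadratically with distance (a rough stand-in for how visual
    acuity declines with eccentricity), never below `min_density`.
    All parameters here are illustrative, not DeepFovea's.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    density = np.clip((fovea_radius / np.maximum(dist, fovea_radius)) ** 2,
                      min_density, 1.0)
    # Keep each pixel with probability equal to its local density.
    return rng.random((height, width)) < density

mask = sample_mask(400, 400, gaze=(200, 200))
kept = mask.mean()  # fraction of pixels rendered
print(f"rendered {kept:.1%} of pixels -> ~{1 / kept:.0f}x fewer shaded pixels")
```

A reconstruction network would then take the sparse samples selected by this mask and in-paint the periphery, which is where the perceptual quality of the final frame is preserved.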
Couch potatoes trying to get in shape could one day be helped along their fitness journey by an ankle exoskeleton that makes running easier and less tiring. The robotic device attaches to a jogger's ankle and was found in lab tests to slash energy expenditure by 14 per cent compared to standard running shoes. It was created by robotics experts at Stanford University and funded in part by sporting behemoth Nike. The engineers behind the project say the equipment currently only works on a treadmill, with the device hooked up to a machine via cables. However, they are working to make the exoskeleton portable, lightweight and easy to integrate into future running equipment.
While exploring a worn-down warehouse, I look through a window and see a room full of zombies. Headcrabs -- disgusting parasites that turn their hosts into monsters -- twitch atop the heads of three former humans. I'll just open the door, toss in a grenade, and mop up any survivors. I remove the pin and grab the handle. The door is locked, leaving me with a live grenade and nowhere to toss it.
What do we actually mean when we refer to interactive technology as "intelligent"? To answer this question, we conducted a data-driven literature analysis. Here I share the key insights from our paper, relevant for anyone involved in creating (intelligent) user interfaces. They give you a communication tool – for example, to help you clarify what is intelligent about your UI or product in discussions among interdisciplinary teams and various stakeholders. First, let me say what we did not do: try to come up with yet another definition of AI, intelligence and so on.
Cattle farmers have been incorporating new technologies into their management of cows for years now, using everything from facial recognition to milking robots. But the internet went wild in late November when a story about Russian farmers using virtual reality goggles on cows went viral. While that story was treated with a fair amount of skepticism by farmers and experts, it did bring a spotlight to the many ways cattle farmers are using technology to reduce the carbon footprint of cows and make farm management more sustainable. "Cows are one of the most important areas that we need to improve tech applications to, principally because on a global agricultural systems basis, cows are our single best source of recycling waste nutrients," said David Hunt, co-founder of Cainthus, an agritech company based in Dublin, California and Ottawa that focuses on digitizing agricultural practices with computer vision and AI. "The criticism of cows that is valid is the methane emissions that go with cows and one of the most important areas in agricultural tech is reducing those methane emissions."
While research in the field of robotics has led to significant advances over the past few years, there are still substantial differences in how humans and robots handle objects. In fact, even the most sophisticated robots developed so far struggle to match the object manipulation skills of the average toddler. One particular aspect of object manipulation that most robots have not yet mastered is reaching for and grasping specific objects in a cluttered environment. To overcome this limitation, researchers at the University of Leeds, as part of an EPSRC-funded project, have recently developed a human-like robotic planner that combines virtual reality (VR) and machine learning (ML) techniques. The new planner, introduced in a paper pre-published on arXiv and set to be presented at the International Conference on Robotics and Automation (ICRA), could enhance the performance of a variety of robots in object manipulation tasks.
Jacquard started out as a sensor on a denim jacket, where specially woven textile on the sleeve let the wearer control actions on their phone by touching the fabric. Swipe a palm up the sleeve to change music tracks, swipe down to call an Uber. A double-tap during a bike ride would send an ETA to a pair of headphones. But Google's wearable sensor technology is evolving beyond just taps and swipes. The Jacquard sensor, called the Tag, can now be installed into the insole of a shoe, where it can automatically identify a series of physical motions.