If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The 2010s sure had their challenges, but one positive change was the leap in technology capabilities. This holds true not only for consumers but also for marketers. For example, the proliferation of smartphones with powerful cameras, loads of apps and high-bandwidth mobile networks has changed how we communicate and share ideas. Marketers can promote events on the go, livestream sessions, share pictures and essentially keep people informed -- globally and in real time. In short, we now have a multimedia studio in our hands.
In recent years there has been growing interest in investment in HR technology. A study carried out by CB Insights (2016) revealed that over $1.96 billion had been invested in start-ups dealing exclusively with HR tech. However, developments in technology require continuous workplace changes. Automation and artificial intelligence are among the tech practices that allow companies to become models of efficiency, high performance and cost-effectiveness. While some worry about people losing their jobs to "superior" robots, others are optimistic that with technology we can all achieve greater things.
In response to the coronavirus health crisis, USC researchers have made a hard pivot, adapting labs and lessons learned from treating other diseases to help check the virus and save lives. At their disposal are numerous technologies that give humans an advantage, despite the fast-break spread of COVID-19 once it exited central China and spread across the globe. The disease has afflicted thousands of Californians and poses a serious risk to public health and the world economy. Tools such as supercomputers, software apps, virtual reality, big data and algorithms are now in play. Researchers are using these tools to find ways to seek out and destroy the coronavirus's RNA, turn smartphones into personal protection devices and deploy people-friendly simulators to help cope with the crush of medical cases.
Wait, how did Netflix know I wanted to watch that? Through the use of machine learning, collaborative filtering, NLP and more, Netflix undertakes a five-step process not only to enhance UX, but to create a tailored, personalized platform that maximizes engagement, retention and enjoyment. In the last decade, learning algorithms and models at Netflix have evolved to include multiple layers, multiple stages and nonlinearities. This has developed to the stage at which they now use machine learning and deep learning variants to rank large catalogues of content by determining the relevance of each title to each user, creating a personalized content strategy. Not only is the content customized, it is also ranked from most to least likely to be watched.
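To make the per-user ranking idea concrete, here is a minimal collaborative-filtering sketch using matrix factorization. It is a toy, not Netflix's actual (proprietary, far deeper) pipeline: the `ratings` matrix, the two-factor model and all hyperparameters below are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy matrix-factorization collaborative filtering (illustrative only).
# Rows = users, columns = titles; entries are ratings, 0 = unobserved.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_titles = ratings.shape
k = 2          # number of latent taste factors (assumed)
lr = 0.01      # learning rate
reg = 0.02     # L2 regularization strength
rng = np.random.default_rng(0)

U = rng.normal(scale=0.1, size=(n_users, k))   # user factor vectors
V = rng.normal(scale=0.1, size=(n_titles, k))  # title factor vectors

# Stochastic gradient descent on the squared error of observed entries.
for epoch in range(2000):
    for u in range(n_users):
        for t in range(n_titles):
            if ratings[u, t] > 0:  # train only on observed ratings
                err = ratings[u, t] - U[u] @ V[t]
                U[u] += lr * (err * V[t] - reg * U[u])
                V[t] += lr * (err * U[u] - reg * V[t])

# Rank the catalogue for user 0, from most to least likely to be watched.
scores = U[0] @ V.T
print(np.argsort(-scores))  # title indices, best match first
```

The key design point the sketch shares with real recommenders is that users and titles live in the same latent space, so relevance reduces to a dot product that can be computed, and sorted, across an entire catalogue.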
Facebook announced that it is releasing DeepFovea, a new state-of-the-art AI-powered foveated rendering technology. Engineers at Facebook Reality Labs have come up with an imagery assistant that creates a "plausible peripheral image" rather than rendering the actual peripheral imagery, which in reality is hazy and unfocused while the gaze is fixed elsewhere. This approach, called foveated reconstruction, compresses the pixel count of the RGB (red, green, blue) video by as much as 14 times without compromising quality, and the result is realistic and gaze-contingent. DeepFovea is one of the first generative adversarial networks (GANs) able to produce natural video sequences, say the Facebook developers of the technology. "DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye," according to Facebook.
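As a rough illustration of the input side of this idea, here is a minimal sketch of gaze-contingent sparse sampling: shading many pixels near the gaze point and very few in the periphery. This is not DeepFovea itself (whose GAN does the hard part of reconstructing plausible peripheral detail from the sparse samples); the resolution, gaze point and density falloff are assumptions for the example.

```python
import numpy as np

# Gaze-contingent sparse sampling sketch (illustrative, not DeepFovea).
h, w = 720, 1280
gaze_y, gaze_x = 360, 640   # assumed gaze point, in pixels

# Distance of every pixel from the gaze point.
ys, xs = np.mgrid[0:h, 0:w]
dist = np.hypot(ys - gaze_y, xs - gaze_x)

# Keep nearly all pixels at the fovea, progressively fewer with eccentricity.
keep_prob = np.clip(1.0 - dist / dist.max(), 0.02, 1.0) ** 2

# Randomly sample pixels according to that density; unsampled pixels would be
# filled in by the reconstruction network rather than rendered.
rng = np.random.default_rng(0)
mask = rng.random((h, w)) < keep_prob

print(f"pixels shaded: {mask.mean():.1%} (~{1 / mask.mean():.0f}x fewer)")
```

The savings come from never shading the masked-out pixels at all; a generative model then synthesizes peripheral content that is plausible rather than pixel-accurate, which human peripheral vision cannot distinguish.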
While exploring a worn-down warehouse, I look through a window and see a room full of zombies. Headcrabs -- disgusting parasites that turn their hosts into monsters -- twitch atop the heads of three former humans. I'll just open the door, toss in a grenade, and mop up any survivors. I remove the pin and grab the handle. The door is locked, leaving me with a live grenade and nowhere to toss it.
What do we actually mean when we refer to interactive technology as "intelligent"? To answer this question, we conducted a data-driven literature analysis. Here I share the key insights from our paper, relevant for those involved in creating (intelligent) user interfaces. This will give you a communication tool, for example to help you clarify what's intelligent about your UI or product in discussions among interdisciplinary teams and various stakeholders. First, let me tell you what we did not do: try to come up with yet another definition of AI, intelligence and so on.
Cattle farmers have been incorporating new technologies into their management of cows for years now, using everything from facial recognition to milking robots. But the internet went wild in late November when a story about Russian farmers putting virtual reality goggles on cows went viral. While that story was treated with a fair amount of skepticism by farmers and experts, it did shine a spotlight on the many ways cattle farmers are using technology to reduce the carbon footprint of cows and make farm management more sustainable. "Cows are one of the most important areas that we need to improve tech applications to, principally because on a global agricultural systems basis, cows are our single best source of recycling waste nutrients," said David Hunt, co-founder of Cainthus, an agritech company based in Dublin, California and Ottawa that focuses on digitizing agricultural practices with computer vision and AI. "The criticism of cows that is valid is the methane emissions that go with cows and one of the most important areas in agricultural tech is reducing those methane emissions."
While research in the field of robotics has led to significant advances over the past few years, there are still substantial differences in how humans and robots handle objects. In fact, even the most sophisticated robots developed so far struggle to match the object manipulation skills of the average toddler. One particular aspect of object manipulation that most robots have not yet mastered is reaching and grasping for specific objects in a cluttered environment. To overcome this limitation, as part of an EPSRC-funded project, researchers at the University of Leeds have recently developed a human-like robotic planner that combines virtual reality (VR) and machine learning (ML) techniques. This new planner, introduced in a paper pre-published on arXiv and set to be presented at the International Conference on Robotics and Automation (ICRA), could enhance the performance of a variety of robots in object manipulation tasks.
I am observing what may be the future of work in a San Francisco skyscraper, watching as a transparent, legless man in a T-shirt hovers above a leather couch. The man is Jacob Loewenstein, the head of business at Spatial, a software company that enables meetings via holograms, which are three-dimensional images. Though he is in New York, a hologram of him appears a few feet in front of me in San Francisco, his face and slightly tousled hair a 3D likeness of the photo I later look up on LinkedIn, his blue T-shirt a sign that he is as casually dressed as any tech worker. As I turn my head, decked in a clunky augmented reality headset, I see a tablet that Loewenstein is holding, which he hands to me. When I try to grab it, though, I end up drawing pink lines through the air instead--I've accidentally enabled a drawing tool in the app instead of the tool that should allow my pinched fingers to grasp an object.