USC Viterbi Professors Burcin Becerik-Gerber and Gale Lucas launch CENTIENTS, a center aimed at fostering research and collaboration toward human-centered design and integration of intelligent technologies into built environments. In the 1960s cartoon The Jetsons, the future was a world full of self-driving cars and sassy, meticulous robots. Individuals, like the patriarch George, could move through space--and the shower--without having to lift a finger. The mechanisms around him played a pivotal role in making decisions on his behalf, based on a learned understanding of his most basic preferences. For a while, this future seemed distant, but upon us now is an unprecedented opportunity to merge human behavior and preferences with automation to create a personalized, dynamic and improved daily reality for individuals at work and at home.
More and more organizations are beginning to use or expand their use of artificial intelligence (AI) tools and services in the workplace. Despite AI's proven potential for enhancing efficiency and decision-making, it has raised a host of issues in the workplace which, in turn, have prompted an array of federal and state regulatory efforts that are likely to increase in the near future. Artificial intelligence, defined very simply, involves machines performing tasks in a way that is intelligent. The AI field involves a number of subfields or forms of AI that solve complex problems associated with human intelligence--for example, machine learning (computers using data to make predictions), natural-language processing (computers processing and understanding a natural human language like English), and computer vision or image recognition (computers processing, identifying, and categorizing images based on their content). One area where AI is becoming increasingly prevalent is in talent acquisition and recruiting.
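The machine-learning subfield mentioned above (computers using data to make predictions) can be illustrated with a minimal sketch. All data, feature names, and the nearest-centroid approach here are invented for illustration; real systems use far larger datasets and richer models.

```python
# Minimal sketch of "machine learning": a computer using labeled data
# to make predictions. All data here is invented for illustration.

def train_centroids(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Predict the label whose centroid is closest (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# Toy data: (hours of study, hours of sleep) -> exam outcome
training = [
    ([8.0, 7.0], "pass"), ([7.5, 6.5], "pass"),
    ([2.0, 4.0], "fail"), ([1.0, 5.0], "fail"),
]
model = train_centroids(training)
print(predict(model, [7.0, 7.0]))  # a well-rested studier -> "pass"
```

The "learning" is nothing more than averaging past examples; the prediction generalizes those averages to unseen inputs, which is the core idea the definition above describes.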
Disruptive changes to business models are having a profound impact on the employment landscape and will continue to transform the workforce over the coming years. Many of the major drivers of transformation currently affecting global industries are expected to have a significant impact on jobs, ranging from substantial job creation to job displacement, and from heightened labour productivity to widening skills gaps. In many industries and countries, the most in-demand occupations or specialties did not exist 10 or even five years ago, and the pace of change is set to accelerate. Artificial Intelligence (AI) is changing the way companies used to work and how they operate today. Cognitive computing, advanced analytics, machine learning, and related techniques enable companies to gain unique experience and groundbreaking insights.
Technology is inherently about humans, and it is perilous to ignore social and psychological impact while creating tech. As engineers we must be aware of the unintended consequences of the technology we create. With the advent of automotive AI and the recent impact of social media platforms on elections, ethics in AI has become a major area of research. A few of the important (but not the only) questions in ethical AI are: Algorithmic bias: ML algorithms trained on biased data reinforce that bias in their results and recommendations. Governance in AI: what are the labor and regulation laws relating to automation and robots? Generative AI: images and videos created by algorithms (GANs) are now virtually indistinguishable from real ones, which is leading to widespread dissemination of fake news; a popular example is a video in which Barack Obama appears to speak words he never uttered in real life.
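The algorithmic-bias point above can be made concrete with a minimal sketch. All data, group labels, and the majority-vote model are invented for illustration; the point is only that a model trained on skewed historical decisions reproduces the skew.

```python
# Minimal sketch of algorithmic bias (all data invented for illustration):
# a model trained on skewed historical hiring decisions reproduces the
# skew, even when the candidates being scored are equally qualified.
from collections import defaultdict

def train_majority(history):
    """For each group, record the majority historical decision."""
    votes = defaultdict(lambda: {"hire": 0, "reject": 0})
    for group, decision in history:
        votes[group][decision] += 1
    return {g: max(v, key=v.get) for g, v in votes.items()}

# Skewed history: group A was mostly hired, group B mostly rejected,
# for reasons unrelated to merit.
history = ([("A", "hire")] * 9 + [("A", "reject")] * 1
           + [("B", "hire")] * 2 + [("B", "reject")] * 8)
model = train_majority(history)

# Two equally qualified candidates get different outcomes:
print(model["A"])  # "hire"
print(model["B"])  # "reject"
```

Nothing in the model is explicitly prejudiced; the bias enters entirely through the training data, which is exactly the failure mode described above.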
We are living in interesting times, where digital assistants schedule meetings, chatbots work alongside humans as teaching assistants, and your suitcase can now become self-driving luggage, as showcased at CES 2018. The implications are just starting to be felt in the workplace. In 2017, I wrote about how The Employee Experience is the Future of Work. Now, as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience. As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well.
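The chatbot definition above (an automated yet personalized conversation) can be sketched minimally. The keyword rules, the user name, and the canned replies are all invented for illustration; production workplace bots use natural-language processing rather than keyword matching.

```python
# Minimal sketch of a chatbot as defined above: automated replies that
# are personalized with stored user context. All rules, names, and
# replies are invented for illustration.

def make_chatbot(user_name):
    context = {"name": user_name}

    def reply(message):
        text = message.lower()
        if "meeting" in text:
            return f"Sure, {context['name']}, I'll schedule that meeting."
        if "vacation" in text:
            return f"{context['name']}, let me look up your vacation balance."
        return f"Sorry, {context['name']}, I didn't understand that."

    return reply

bot = make_chatbot("Priya")
print(bot("Can you set up a meeting for Friday?"))
```

The closure holds per-user context, which is what makes the automated conversation "personalized" in the sense the definition uses.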
HONG KONG (Reuters Breakingviews) - Artificial intelligence doesn't hate you, prominent researcher Eliezer Yudkowsky wrote, "nor does it love you, but you are made of atoms which it can use for something else". This sets the scene for Tom Chivers' fascinating new book, which borrows its title from the quote, on why so-called superintelligence should be viewed as an existential threat potentially greater than nuclear weapons or climate change. The "strange, irascible and brilliant" Yudkowsky is a central figure throughout the book. His early musings on the potential and dangers of artificial intelligence during the mid- to late-2000s gave birth to the Rationalist movement, a loose community dedicated to AI safety. Chivers, a former science journalist with Buzzfeed and the Telegraph, offers a meticulously researched investigation into who the Rationalists are, and more importantly why they believe humanity is fast approaching an inflection point between "extinction and godhood".
Most of us do not have an equal voice or representation in this new world order. Leading the way instead are scientists and engineers who don't seem to understand how to model the ways we live as individuals or in groups--the main ways we live, work, cooperate, and exist together--nor how to incorporate our ethnic, cultural, gender, age, geographic or economic diversity into their models. The result is that AI will benefit some of us far more than others, depending upon who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do. The power structures that developed the world's complex civic and corporate systems were not initially concerned with diversity or equality, and as these systems migrate to becoming automated, untangling and teasing out the meaning for the rest of us becomes much more complicated. In the process, there is a risk that we will become further dependent on systems that don't represent us.