We know content should be valuable, comprehensive, fresh, relevant, and accurate. More fundamental than all of those qualities, though, is that it must be authentic. People trust factual information presented with sincere intentions. The era of "fake news" has stoked fear for exactly this reason: we have had to fight to build the authority of our pages and domains to signal that we are worthy of trust. But our industry has yet to face its biggest challenge.
Since the birth of the field of Artificial Intelligence (AI), researchers have worked on creating ever more capable machines, but with recent successes in multiple subdomains of AI [1-7], the safety and security of such systems and of predicted future superintelligences [8, 9] have become paramount [10, 11]. While many diverse safety mechanisms are being investigated [12, 13], the ultimate goal is to align AI with the goals, values, and preferences of its users, which is likely to include all of humanity. The value alignment problem can be decomposed into three sub-problems: extraction of personal values from individual persons, combination of such personal preferences in a way that is acceptable to all, and finally production of an intelligent system that implements the combined values of humanity. A number of approaches for extracting values [15-17] from people have been investigated, including inverse reinforcement learning [18, 19], brain scanning, value learning from literature, and understanding of human cognitive limitations. Assessment of the potential for success of particular value-extraction techniques is beyond the scope of this paper; we simply assume that one of the current methods, a combination of them, or some future approach will allow us to accurately learn the values of given people.
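To make the second sub-problem concrete — combining individual preferences in a way that is acceptable to all — here is a deliberately simple sketch using a Borda count over personal rankings. The function name, the candidate values, and the choice of Borda count are all illustrative assumptions on my part; the paper does not commit to any particular aggregation rule, and real value aggregation remains an open problem.

```python
from collections import defaultdict

def aggregate_preferences(rankings):
    """Combine individual preference rankings with a Borda count.

    rankings: a list of lists; each inner list is one person's options
    ordered from most to least preferred. Returns all options sorted by
    total score (ties broken alphabetically). Illustrative only.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            # The top choice earns the most points, the last earns zero.
            scores[option] += n - 1 - position
    return sorted(scores, key=lambda o: (-scores[o], o))

# Three hypothetical individuals rank three candidate values.
people = [
    ["privacy", "fairness", "efficiency"],
    ["fairness", "privacy", "efficiency"],
    ["fairness", "efficiency", "privacy"],
]
print(aggregate_preferences(people))  # → ['fairness', 'privacy', 'efficiency']
```

Even this toy rule shows why aggregation is hard: the outcome depends on the voting scheme chosen, and no scheme satisfies every fairness criterion at once (Arrow's theorem).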
If you are already using a pre-curated dataset, such as Labeled Faces in the Wild (LFW), then the hard work is done for you. You'll be able to use next week's blog post to create your facial recognition application. But most of us will instead want to recognize faces that are not part of any current dataset — faces of ourselves, friends, family members, coworkers, colleagues, and so on. To accomplish this, we need to gather examples of the faces we want to recognize and then quantify them in some manner. This process is typically referred to as facial recognition enrollment.
Ansible 2 is out, and that means it's time to update the previous article on Running Ansible Programmatically for Ansible 2, which has significant API changes under the hood. At work, we are spinning up hosted trials for a historically on-premise product (no multi-tenancy). To ensure things run smoothly, we need logging and reporting of Ansible runs while these trials spin up or are updated. Each server instance (an installation of the application) has unique data (license, domain configuration, etc.). Running Ansible programmatically gives us the most flexibility and has proven to be a reliable way to go about this.
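The article itself drives Ansible through its Python API, but the core idea — per-instance variables plus a log of every run — can be sketched with a simple shell-out wrapper around `ansible-playbook`, which is version-stable in a way the internal API is not. The playbook name, inventory file, and variable keys below are made-up examples.

```python
import subprocess

def build_playbook_command(playbook, inventory, extra_vars):
    """Build the argv for an ansible-playbook run for one trial instance.

    Per-instance data (license, domain, etc.) is passed via the real
    --extra-vars flag as space-separated key=value pairs, sorted so the
    command is deterministic and easy to diff in logs.
    """
    pairs = " ".join(f"{k}={v}" for k, v in sorted(extra_vars.items()))
    return ["ansible-playbook", playbook, "-i", inventory,
            "--extra-vars", pairs]

def run_playbook(playbook, inventory, extra_vars, log_path):
    """Run the playbook, appending combined stdout/stderr to log_path,
    and return the exit code for reporting."""
    cmd = build_playbook_command(playbook, inventory, extra_vars)
    with open(log_path, "ab") as log:
        result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
    return result.returncode
```

The trade-off versus the Python API is less granular control (no custom callbacks per task), in exchange for immunity to the internal API churn the article has to work around.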
Deepfakes are manipulated videos that can make people appear to say things they never said. Barack Obama and Nicolas Cage have both been featured in these videos. It used to take a lot of time and expertise to realistically falsify video. For decades, authentic-looking video renderings were only seen in big-budget sci-fi films like "Star Wars." However, thanks to the rise of artificial intelligence, doctoring footage has become more accessible than ever, which researchers say poses a threat to national security.