As a combat veteran and, more recently, an industry technologist and university professor, I have observed with concern the increasing automation--and dehumanization--of warfare. Sarah Underwood's discussion of autonomous weapons in her news story "Potential and Peril" (June 2017), which highlights this trend, also reminded me of the current effort to update the ACM Code of Ethics, which says nothing about the responsibilities of ACM members in defense industries building the software and hardware in weapons systems. Underwood wrote that understanding the limitations, dangers, and potential of autonomous and other warfare technologies must be a priority for those designing such systems, in order to minimize the "collateral damage" of civilian casualties and the destruction of property and infrastructure. Defense technologists must be aware of, and follow, appropriate ethical guidelines for creating and managing automated weapons systems of any kind.
Hence, the mission of the AI Post (and The American Institute of Artificial Intelligence) is to advance artificial intelligence safely and responsibly: not from the perspective of blatant marketing, not just for the consumption of scientists, not to create hype, fear, or panic, but to educate the public and governments in a responsible, objective, and sophisticated manner; to help develop the science; to remove obstacles; to fill in the gaps; and to ensure that artificial intelligence is advanced in a safe and responsible manner in which the interests of human and biological lifeforms are protected. For example, while maximizing adoption of the technology will drive us to help increase consumption of AI products and services, minimizing the social, economic, and political costs may lead us to recommend regulation, control, and curtailment strategies.
We'll be live streaming both events on YouTube, so if you aren't able to make it, do watch the live streams (YouTube lets you set a reminder). The Fifth Elephant and Anthill Inside expose you to trends in data science, deep learning and artificial intelligence. Let's walk through the schedules for both events. At this point, we'd like to make a special mention of our diversity sponsor, Intuit India, for sponsoring child care facilities at The Fifth Elephant and Anthill Inside. We'd also like to talk about building a community for women and non-binary data scientists, the problems we are solving in the field, and how we can foster more diversity in data science. On that note, Intel has created a developer portal for ML engineers, data scientists and students, with resources on optimized frameworks and training for artificial intelligence, machine learning, and deep learning.
MEPs have approved rules for keeping humans firmly in charge of Artificial Intelligence (AI). They have also called for ethical standards to be built into AI algorithms and robots that work for humans, and for standardisation across Member States to ensure a level playing field for technology companies. The people of Europe can also have a say: Parliament's Legal Affairs committee has opened an online public consultation that lasts until the end of April 2017. In my interview with her, Rapporteur Mady Delvaux-Stehres insists that MEPs are not trying to stop technological advances or stifle innovation: "The European Parliament thinks there should be ethics by design… I understand that it will be difficult and time consuming, but Parliament hopes by standardisation to prevent unethical robots from coming to the market." Delvaux's committee has developed ethical principles relating to human rights to safety, privacy, integrity, dignity, autonomy and data ownership. They include creating legal liability and insurance for driverless vehicles, and compensation for victims when such vehicles go wrong.
I recently published and presented a paper at CHI 2017 (the annual ACM Conference on Human Factors in Computing Systems, https://chi2017.acm.org); the paper won an Honorable Mention award at the conference. Here's a summary of the project. There is now tremendous momentum behind initiatives to teach computer programming to a broad audience, yet many of these efforts (for example, Code.org) target younger learners. In contrast, I wanted to study the other end of the age spectrum: how older adults aged 60 and over are now learning to code. This population is already significant and is quickly growing as we all (hopefully!) continue to live longer in the coming decades: the United Nations estimates that by 2030, 25% of North Americans and Europeans will be over 60 years old, and 16% of the worldwide population will be over 60. There has been extensive research on how older adults consume technology, and some studies of how they curate and produce digital content such as blogs and personal photo ...
The report investigated how many travel brands have used Facebook Messenger to deliver customer service, and how many bookings were secured. Nearly two-thirds of airline brands (64.1%) are responding to customers within 24 hours, ahead of hotels and car rental companies, in that order. Just under half of online travel agencies (OTAs) provided assistance for booking through a Messenger chatbot, compared to 18.8% of car rental companies, 15.2% of hotels, and 8.7% of airlines. So as Facebook reports earnings this week, travel brands should be looking to read between the lines to understand where one of the world's three most valuable internet companies is headed next.
Sure, it's not the awe-inspiring, interactive experience that leaves people speechless, but it's by far the easiest way to turn people on to the possibilities of virtual reality. Interactive virtual reality content running on headsets like the Rift or Vive will not deliver the same number of views as content watched on mobile HMDs anytime soon. This camera solution uses Google's AI algorithms to stitch the individual videos by pinpointing patterns after they are uploaded to Google's servers--which is great for people entering the industry at the stitching level.
Today, we live in a time characterized by rapid technology transformation and the resulting social, political, and economic disruption. In a rapidly changing business and technology environment, where the ability to act with speed can be the foundation for innovation and the acceleration of business opportunities, firms increasingly recognize that business as usual may not be a prudent path. At the MIT Symposium, Brynjolfsson and McAfee spoke about "interconnected humanity" and the impact of rapid disruption and change on human lives, remarking at one point on the sharp rise in the rate of "deaths from despair" as many in society are unprepared for dislocation and disconnection in the social, economic, and political worlds. He is a contributor to Forbes, Harvard Business Review, MIT Sloan Management Review, and The Wall Street Journal, and is Founder and Executive Director of the Big Data for Social Justice Foundation.
People love to talk about artificial intelligence tools' potential to make content management and CRM work easier, better and more efficient. But AI projects require continuous human monitoring, or their results degrade into something at best irrelevant to the business and at worst detrimental to business goals and harmful to customers. In content management systems, AI can police activity in both sales and service data stores for potential fraud and report suspicious customer activity to experts who can take a closer look. "That's where the hard part is," said Thomas Dong, OpenText's vice president of product marketing, acknowledging that product marketers sometimes do simplify AI project requirements because the processes around advanced analytics aren't always easy to explain.
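The pattern described here, software flagging suspicious activity for a human expert to review rather than acting on its own, is a form of anomaly detection with a human in the loop. As a minimal, purely illustrative sketch (not OpenText's actual method), the function below flags transaction amounts that are statistical outliers; the `flag_suspicious` name and the z-score threshold are assumptions for illustration:

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard
    deviations from the mean, for review by a human expert.

    A deliberately simple z-score outlier test; real fraud systems
    use far richer features and models.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Fifty ordinary transactions and one extreme one: only the
# extreme value is surfaced for human review.
flagged = flag_suspicious([100.0] * 50 + [10000.0])
```

Note that the function only *reports* outliers rather than blocking anything, which mirrors the article's point: the AI surfaces candidates, and continuous human monitoring decides what they mean.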