News concerning Artificial Intelligence (AI) abounds again. The progress with deep learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson winning at Jeopardy, and systems beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers now under consideration would approach what many believe is this level. However, many questions remain unanswered about how the human brain works, and specifically about the hard problem of consciousness with its integrated subjective experiences.
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines are better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
"Life essentials made better and more affordable." These are the types of startups that partner Paul Buchheit said were demoing today at Y Combinator's Winter 2016 Demo Day 2. Yesterday, we covered the first 60 startups from the batch and picked our 7 favorites. Buchheit went on to say about today's big aspirations, "Those challenges may seem too large or too complex for a startup to solve. But as Kyle and Dan showed us with Cruise, often the hardest problems are the best investments." He was referring to GM's $1 billion acquisition of Cruise, a YC startup that built self-driving car tech. Today, the room was jam-packed, with more chairs brought in for rich investors who had been forced to sit on the floor yesterday. Buchheit joked about the first YC batch in summer 2005, saying "Back then no one wanted to go to Demo Day." Someone in the crowd yelled, "15 people wanted to go to Demo Day." Now, there are several hundred VCs avidly watching the presentations. Over the past few years, Y Combinator has expanded to accept startups from a much wider range of industries than traditional apps, including biotech, energy, hardware, and international logistics. When we spoke to investors in the past, some worried they might not have the expertise necessary to evaluate these companies. Now, YC President Sam Altman tells me many VCs have "hired other experts" to fill the gaps. He says "it's become fashionable to hire a Chief Science Officer." As a result, Altman believes that when it comes to funding, these alternative startups "seem to be doing just as well if not a little better" than their traditional software batchmates.
Spinal Singularity – Better catheter
Last year, over 5 million people were catheterized. Spinal Singularity wants to tap into the $2 billion urinary catheter market with a connected catheter that allows users to control the flow of urine by actuating a magnetic valve.
Just five years ago, artificial intelligence-enabled computers could barely recognize images fed to them, much less analyze them the way people can. But suddenly, they've turned the tables. "In 2011 their error rate was 26 percent," says Jeff Dean, chief of the Google Brain project, which along with other tech giants has helped lead a recent revolution in image recognition as well as speech recognition and self-driving cars. Now, he says, computers' ability to view and analyze images exceeds what human eyes can do. "If you'd told me that would be possible just a few years ago, I would've never believed you," Dean said during an appearance at a research event in Heidelberg, Germany.
Over the past few years, the CES trade show has become a familiar post-holidays pilgrimage for many of the country's biggest marketers. They see the event as a way to get a sneak peek at the latest gadgets and technologies that can help them engage with their customers. This year, marketing executives from companies such as Coca-Cola, Unilever, Johnson & Johnson, Campbell Soup and PepsiCo Inc. made their way to Las Vegas for the gathering. The convention was jam-packed with everything from self-driving cars to chess-playing robots to Procter & Gamble's air-freshener spray that can connect with Alphabet Inc.'s Nest to automatically release pleasant scents in the home. But there was one category that seemed to especially win over marketers: virtual assistants.