How Artificial Intelligence and Machine Learning Impact Project Management

#artificialintelligence

In this article, we will discuss what I believe is one of the most significant issues facing the future of project management. Let me start by asking three questions. If you're a project manager and don't know the answers to those three questions, I suggest you read further, because your career might depend on knowing them. So why is the 5th of December 2017 a significant date for those of us who take even a cursory interest in the development of AI and machine learning (ML)? It was a special day: on that day, one computer beat another computer at the Top Chess Engine Championship.


"Just A Few Techie Definitions"

#artificialintelligence

This information is collected in one of my published books, “A Few Tech Definitions From A to Z…”; I thought I might help as many readers as I can to understand what all those “techie speak” words, abbreviations, and symbols mean. To the student(s) who


Reclaim Internet Greatness

Communications of the ACM

His concern is warranted and will require us to strike a balance between protecting the democratic and egalitarian values that made the Internet great to begin with and ensuring those values are used for good. The fundamental issue, then, in creating a 21st-century Internet becomes what changes are warranted and who will be responsible for defining and administering them. On the technology dimension, computer scientists and engineers must develop smarter systems for detecting, addressing, and preventing malicious content on the Web. Cerf's argument on behalf of user training is helpful but will not ultimately solve the problem of an untrustworthy, ungovernable, potentially malicious network. I myself recently fell for a phishing attack, which only proves that today's attacks can fool even savvy, experienced users.



Artificial Intelligence: An Historic Perspective

#artificialintelligence

We've discussed artificial intelligence (AI) quite a bit in this column thus far -- and with good reason. AI is currently THE topic in legal tech (although Blockchain is certainly running a close second), and it's almost impossible to carry on an in-depth discussion about the future of the legal industry without mentioning AI. Legal professionals, librarians, and analysts alike have speculated on the rise of the robo-lawyer, the role that increasingly sophisticated machines will play in the practice of law -- and even whether lawyers will cease to exist at some point in the future. Given the way in which AI has penetrated the conversation around legal technology, I think it makes sense to examine AI's larger history. To quote one of my favorite musicians, Bob Marley: "In this great future, we can't forget our past."


Cognitive collaboration

#artificialintelligence

Although artificial intelligence (AI) has experienced a number of "springs" and "winters" in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts. The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! champions. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars. All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. Regarding the latter topic, Elon Musk has described AI as "our biggest existential threat," and Stephen Hawking warned that "The development of full artificial intelligence could spell the end of the human race." In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a technological "singularity" at which point the general cognitive abilities of computers exceed those of humans. Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to "outthink" us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.
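To make the term concrete, here is a minimal sketch of the core idea behind the deep learning mentioned above: a small feed-forward neural network whose weights are adjusted by gradient descent. This is an illustration only, written in plain NumPy with made-up layer sizes and learning rate; the systems discussed in the article use far larger networks and far more sophisticated training.

```python
# Minimal sketch: a tiny feed-forward neural network learning XOR
# by gradient descent. Illustrative only; layer sizes, learning rate,
# and iteration count are arbitrary choices for this toy example.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; stacking many such layers
# is what puts the "deep" in deep learning.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The same recipe of stacked layers trained by gradient-based optimization, scaled up enormously in data and parameters, underlies the game-playing and perception breakthroughs the passage describes.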


Long Promised Artificial Intelligence Is Looming--and It's Going to Be Amazing

#artificialintelligence

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world's chess champion. That didn't happen until 1997. And despite Marvin Minsky's 1970 prediction that "in from three to eight years we will have a machine with the general intelligence of an average human being," we still consider that a feat of science fiction. The pioneers of artificial intelligence were surely off on the timing, but they weren't wrong; AI is coming.

