If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Even if you're not a gadget geek, you likely know whether your laptop is powered by an Intel chip or one from a competitor like AMD. The sticker plastered next to your keyboard won't let you forget. But even if you know your Ryzens from your Ice Lakes, you probably don't put much thought into who makes the memory chips that store your data and keep your laptop and smartphone working. There's a decent chance at least one of your gadgets includes memory made by a company called Micron Technology. Boise, Idaho-based Micron is one of only three outfits that still make DRAM, the chips that provide short-term memory in personal computers, smartphones, tablets, and other devices.
The science and tech world has been abuzz about quantum computers for years, but the devices are not yet affecting our daily lives. Quantum systems could seamlessly encrypt data, help us make sense of the huge amount of data we've already collected, and solve complex problems that even the most powerful supercomputers cannot – such as medical diagnostics and weather prediction. That nebulous quantum future came a step closer this November, when the top-tier journal Nature published two papers that showed some of the most advanced quantum systems yet. If you still don't understand what a quantum computer is, what it does, or what it could do for you, never fear. Futurism recently spoke with Mikhail Lukin, a physics professor at Harvard University and the senior author of one of those papers, about the current state of quantum computing, when we might have quantum technology on our phones or our desks, and what it will take for that to happen. This interview has been slightly edited for clarity and brevity.
Think about it: AI could be everywhere one day. IBM is a big company, obviously. The negative side of that breadth is the perception (held by some) that IBM is a big (tough-to-manoeuvre) ship with so much tradition and history that it might be set in its ways. The positive side is that the firm can spend a lot of time developing potentially world-changing (in some cases actually life-saving) innovations in its IBM Research department, some of which may not, at first, even appear to be directly related to computing. If its 'paradigm-shifting' creations and inventions actually do help create new global processes in areas like human water supply, quantum computing and blockchain (to name three), then these elements of the new IBM could arguably serve to counter the notion of the old IBM.
Over at the Lenovo Blog, Dr. Bhushan Desam writes that the company just updated its LiCO tools to accelerate AI deployment and development for Enterprise and HPC implementations. The newly updated Lenovo Intelligent Computing Orchestration (LiCO) tools are designed to overcome recurring pain points for enterprise customers and others implementing multi-user environments using clusters for both HPC workflows and AI development. LiCO simplifies resource management and makes launching AI training jobs in clusters easy. LiCO currently supports multiple AI frameworks, including TensorFlow, Caffe, Intel Caffe, and MXNet. Additionally, multiple versions of those AI frameworks can easily be maintained and managed using Singularity containers.
Tencent, Alibaba, Baidu and JD.com from China are in a global competition with Google/Alphabet, Apple, Facebook, Walmart and Amazon from the USA and SoftBank from Japan. All are aggressively searching for talent, intellectual property, market share, logistics and supply chain technology, and presence all around the world. These leading tech-savvy companies have many things in common. Foremost, they are all in pursuit of global growth and the funding, technology and talent to propel that growth. And they all are investing in voice assistants and other forms of AI and robotics.
Deep-Learning-as-a-Service, unveiled at IBM's annual IT industry conference in Las Vegas, seeks to lower barriers to deploying AI and deep-learning tools, a complex and painstakingly repetitive process that requires large amounts of computing power, the company said. The new service allows companies to upload data in Watson Studio, IBM's cloud-native platform for data scientists, developers and business analysts. There, they can create deep-learning models – known in AI parlance as "neural networks" – for their datasets, using a drag-and-drop interface to select, configure, design and code the network. IBM also has automated the repetitive process of fine-tuning deep-learning algorithms, with successive training runs started, monitored and stopped automatically. For many firms, the complexity of creating smart algorithms from scratch has kept them from leveraging AI to parse massive stores of data for business value, the company said.
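The monitor-and-stop loop IBM describes is, in spirit, the "early stopping" that data scientists otherwise hand-roll: watch a held-out validation loss and end the run when it stops improving. Here is a minimal sketch in plain Python – not IBM's API; the toy model, learning rate and patience threshold are all illustrative:

```python
import random

def train_with_early_stopping(patience=5, max_steps=500, seed=0):
    """Fit y = 2x + 1 by gradient descent, stopping the run automatically
    when validation loss stops improving -- the kind of monitor-and-stop
    loop the service automates. All numbers here are illustrative."""
    rnd = random.Random(seed)
    data = [(x, 2 * x + 1 + rnd.gauss(0, 0.05))
            for x in (rnd.uniform(-1, 1) for _ in range(200))]
    train, val = data[:150], data[150:]

    def mse(w, b, pts):
        return sum((w * x + b - y) ** 2 for x, y in pts) / len(pts)

    w = b = 0.0
    lr = 0.1
    best = float("inf")
    stale = 0
    for step in range(1, max_steps + 1):
        # one full-batch gradient-descent step on the training split
        gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
        gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
        w, b = w - lr * gw, b - lr * gb
        loss = mse(w, b, val)
        if loss < best - 1e-6:       # still improving: reset the counter
            best, stale = loss, 0
        else:                        # stalled: spend one unit of patience
            stale += 1
            if stale >= patience:    # stop the run automatically
                break
    return w, b, best, step

w, b, loss, steps = train_with_early_stopping()
print(f"w={w:.2f} b={b:.2f} val-MSE={loss:.4f} steps={steps}")
```

The fitted slope and intercept land near the true values (2 and 1) and the run ends well before the step budget is exhausted, because further steps no longer move the validation loss.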
Artificial Intelligence (AI) is once again a promising technology. The last time this happened was in the 1980s, and before that, the late 1950s through the early 1960s. In between, commentators often described AI as having fallen into "Winter," a period of decline, pessimism, and low funding. Understanding the field's more than six decades of history is difficult because most of our narratives about it have been written by AI insiders and developers themselves, most often from a narrowly American perspective. In addition, the trials and errors of the early years are scarcely discussed in light of the current hype around AI, heightening the risk that past mistakes will be repeated. How can we make better sense of AI's history and what might it tell us about the present moment?
ORLANDO, Fla. – Speech recognition technologies have improved so much in recent years – thanks to cloud computing and advances in machine learning – that the virtual assistants created by Amazon, Google and Apple have quickly become popular with consumers. So it should come as little surprise that the underlying natural language technology is making inroads at work, too. "I would say that it [enterprise adoption] is in early stages now, but there are certainly basic capabilities here today," Jon Arnold, of J Arnold & Associates, said at the Enterprise Connect conference last week. The main uses for speech recognition in the office will, at least at first, revolve around improving employee productivity and automating workflows. Thanks to advances in artificial intelligence (A.I.) techniques, the accuracy of speech recognition systems has improved significantly, with Google and others passing the 95% accuracy mark.
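The "95% accuracy" figures quoted for speech recognition usually mean word accuracy, i.e. one minus the word error rate (WER), which is scored against a reference transcript as a word-level edit distance. A minimal sketch of that standard calculation (not any vendor's scoring tool; the example sentences are made up):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed as word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                            # delete all reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                            # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match or substitution
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "schedule a meeting with the sales team tomorrow morning"
hyp = "schedule a meeting with a sales team tomorrow"
wer = word_error_rate(ref, hyp)
print(f"WER {wer:.2%}, word accuracy {1 - wer:.2%}")
```

Here the recognizer substituted one word and dropped another, so two of nine reference words are in error; crossing the 95% mark means fewer than one word in twenty is mis-transcribed.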
For newbies, this is the best place to start: introductions, FAQs and a glossary of terms. Information on the different types of learning algorithms used in AI and ML systems and applications. A list of software tools used to simulate AI techniques, both free open-source and commercial. A list of free data sets that can be used for research and testing of AI learning algorithms. Find out how different hardware can be used to host and accelerate the performance of AI applications.
Where some businesses are employing artificial intelligence to sell you more, IBM is using it to sell you less. Specifically, it's employing one set of AI tools to minimize the amount of compute time on its cloud services you need to buy in order to train another set of AI tools to run your business. That will also allow IBM's customers to make the most of another scarce and expensive resource, AI expertise, according to Ruchir Puri, Chief Architect for IBM Watson and an IBM Fellow. "We're lowering the barrier to entry for machine learning capabilities for enterprise," Puri said. The barrier Puri is talking about is the scarcity of human expertise in deep learning, a technique for training an artificial intelligence in a particular domain of expertise.