If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Arm aims to bring machine learning to mainstream and low-end devices with the launch of its new neural processing units (NPUs). The company is unveiling the Ethos-N57 and Ethos-N37 NPUs, which it will license to chipmakers who can integrate them into their products. The idea is to extend the range of Arm machine learning (ML) processors to enable artificial intelligence (AI) applications in mainstream devices. The company also unveiled the Mali-G57 graphics processing unit (GPU), the first mainstream GPU based on the Valhall architecture, delivering 1.3 times better performance than previous generations.
If you make a purchase by clicking one of our links, we may earn a small share of the revenue. However, our picks and opinions are independent from USA TODAY's newsroom and any business incentives. Buying a gaming machine is no longer as challenging as a game of Tetris. Thanks to smaller chips and lighter hardware, you can now buy a powerhouse in laptop form that will optimize the gaming experience without having to piece together the hardware yourself. Gaming laptops have made PC gaming much more accessible, and you can easily find machines with high-resolution displays and the latest graphics cards without breaking the bank. We've done the hard work for you, researching and testing the top options on the market. After weeks of testing, we think the Alienware M15 with an Nvidia GeForce RTX 2070 is the best overall for people who want a high-performance machine. If you need something a little more budget-friendly, the Acer Nitro 7 can handle just about any current game as well, for under $1,000.
WASHINGTON, D.C.--(BUSINESS WIRE)--At the Association for the United States Army (AUSA) conference today, General Micro Systems (GMS) announced that its new S422-SW and X422 combination has been chosen for two new military development programs. The system pair brings a massive amount of server processing power, 10/40/100 Gigabit networking ports for sensors, and general-purpose graphics processing unit (GPGPU) artificial intelligence (AI) onto the battlefield for the first time, in two small "shoebox-sized" rugged chassis designed to survive the harshest conditions where regular rackmount servers cannot. The two programs that selected the S422-SW "Thunder" and X422 "Lightning" combo will deploy it in mobile platforms to move IP-based sensor data instantaneously over multi-sensor LANs into the server and AI processor. Once the data is processed, the server reports information to operators that can help maneuver a vehicle or UAS in real time, calculate a fire control solution for a weapon, or identify threats such as stationary IEDs or incoming objects such as projectiles. "The tremendous processing power of this combo makes it a highly attractive option for these two development programs as well as others creating autonomous, self-driving or self-piloting vehicles," said Ben Sharfi, chief architect and CEO, General Micro Systems.
"The tantalizing promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here, we report using a processor with programmable superconducting qubits to create quantum states on 53 qubits, occupying a state space of size 2^53 ≈ 10^16. Measurements from repeated experiments sample the corresponding probability distribution, which we verify using classical simulations. While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task. This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm." It is fascinating to consider what will happen next at the intersection of quantum information and artificial intelligence. It is also hard to tell where it will lead — perhaps to a new computing paradigm?
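The state-space figure quoted above is simple to verify: an n-qubit register is described by 2^n complex amplitudes, so 53 qubits span 2^53 of them. A quick sanity check in Python:

```python
# Each qubit doubles the number of amplitudes in the quantum state,
# so 53 qubits span a state space of 2^53 dimensions.
n_qubits = 53
dim = 2 ** n_qubits
print(dim)            # 9007199254740992
print(f"{dim:.1e}")   # 9.0e+15 -- on the order of 10^16, as the paper states
```

This is why classical simulation becomes infeasible: storing one complex amplitude per dimension at 16 bytes each would already require over a hundred petabytes of memory.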
Autonomous robots are complicated machines. A robot needs many different systems to understand its environment, learn from it, and then take actions within that environment. Even one of those components can take significant time to understand, develop, and integrate. The earliest self-driving cars required racks of servers running Intel Xeon processors to understand and navigate their environments. While autonomous robots don't usually face the same level of constraints as a self-driving car, they operate in a similar problem space.
Alibaba is well aware of the growing demand for dedicated compute to power today's AI applications. Last year, the Hangzhou-based tech giant launched its semiconductor subsidiary Pingtouge ("Honey Badger" in Chinese) to develop embedded chips and neural network accelerators. At the time, Alibaba CTO Jeff Zhang pledged that Pingtouge would produce the world's most advanced neural network chip by the middle of this year. Today, Alibaba kept its promise: at the Alibaba Cloud (Aliyun) Apsara Conference 2019, Pingtouge unveiled its first dedicated AI processor for cloud-based large-scale AI inferencing.
Though the growth of the PC market has slowed, gamers still need high-performance computers, Santhosh Viswanathan says. Intel will now transform from a PC-centric company to a data-centric one in line with the growing data business, he said. The company has already moved in that direction by tripling its staff in Thailand to collaborate with local allies, reaching customers in every segment of the industry, and by shifting innovative development to meet market demand. Since Thailand is an important market with many opportunities, Intel will prioritise the growth of the whole ecosystem and focus on customers in the IoT industry and e-commerce businesses, particularly business-to-business companies, he said. Moreover, Intel will study new technology markets by supporting video game service providers and computer manufacturers in e-sports tournaments, the managing director said.
Big data center operators say they are seeing a steady stream of new architectures for accelerating deep learning neural networks--and the flow is just getting started, according to comments at last week's AI Hardware Summit. One analyst pegged the number of established and startup companies designing AI accelerators at a whopping 130. "The machine-learning revolution has reopened the opportunity for new architectures…let a thousand flowers bloom," said Alphabet Chairman and former Stanford President John Hennessy in an opening keynote at the event. Such domain-specific chips don't have to be compatible with legacy object code, so the industry "can introduce new architectures faster than in general-purpose computing," he added. Potential users from Alibaba, Facebook, Google, and Uber said the chip vendors need to show their benchmark scores, make their software easy to use, and conform to emerging standards. "We are sampling a few vendors' upcoming products, and one issue is using their software correctly…it takes a long time to vet hardware and a lot of time to bring new software into our ecosystem," said Linjie Xu, director of applied AI architecture at Alibaba Cloud, speaking on a panel.
The second AI HW Summit took place in the heart of Silicon Valley on September 17-18, with nearly fifty speakers presenting to over 500 attendees (almost twice the size of last year's inaugural audience). While I cannot possibly cover all the interesting companies on display in a short blog, there are a few observations I'd like to share. Computer architecture legend John Hennessy, Chairman of Alphabet and former President of Stanford University, set the stage for the event by describing how historical semiconductor trends, including the untimely demise of Moore's Law and Dennard scaling, led to the demand and opportunity for "Domain-Specific Architectures." This "DSA" concept applies not only to novel hardware designs but to the new software architecture of deep neural networks. The challenge is to create and train massive neural networks and then optimize those networks to run efficiently on a DSA, be it a CPU, GPU, TPU, ASIC, FPGA or ACAP, for "inference" processing of new input data.
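One common step in "optimizing a trained network to run efficiently on a DSA" is post-training quantization: converting 32-bit floating-point weights to 8-bit integers that accelerator hardware can process far more cheaply. As a minimal sketch (the function names `quantize_int8` and `dequantize` are illustrative, not from any vendor SDK, and real toolchains use per-channel scales and calibration data):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto the range [-127, 127].

    Assumes at least one weight is nonzero.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]

# A toy layer's weights: the largest-magnitude value maps to -127.
q, scale = quantize_int8([0.5, -1.0, 0.25])
print(q)                        # small integers, all within [-127, 127]
print(dequantize(q, scale))     # close to the original floats
```

The accelerator then runs matrix multiplies on the small integers, trading a little precision for large gains in throughput and energy efficiency — exactly the kind of network-to-hardware co-optimization the DSA concept describes.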
General Micro Systems (GMS), the rugged C4ISR mobile systems and servers company, today announced the industry's smallest, lightest and most SWaP-C-optimized workstation, display and general-purpose graphics processing unit (GPGPU) artificial intelligence (AI) algorithm and video processor. At only seven pounds and 9.8 inches x 5.4 inches x 2.3 inches, the ultra-rugged S1202-XVE Peacock III enables near-real-time processing of large amounts of high-quality images, video or sensor data for immediate and accurate analysis--right on the battlefield. This powerful processing performance makes the system ideal for military applications in harsh environments such as airborne reconnaissance, autonomous vehicles, wide-body C4ISR platforms, multi-console displays and other areas of modern warfare. When equipped with third-party software algorithms, the S1202-XVE compresses, transcodes, transmits and stores live video and sensor data over IP-based terrestrial or satellite networks with up to 2:1 HEVC compression (compared with AVC) while retaining resolution. The S1202-XVE supports three independent outputs of 4K UHD video, with an additional Nvidia Quadro Pascal GPGPU providing up to eight TFLOPS for algorithm, vector or AI processing in near real time.