If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Robots can do a lot. They build cars in factories. Robotic dogs can, allegedly and a little creepily, make us safer by patrolling our streets. But there are some things robots still cannot do – things that sound quite basic in comparison. "It's a simple thing" for humans, says robotics researcher Joe Davidson.
May 11 (Reuters) - The science fiction is harder to see in Google's second try at glasses with a built-in computer. A decade after the debut of Google Glass, a nubby, sci-fi-looking pair of specs that filmed what wearers saw but raised concerns about privacy and received low marks for design, the Alphabet Inc (GOOGL.O) unit on Wednesday previewed a yet-unnamed pair of standard-looking glasses that display translations of conversations in real time and showed no hint of a camera. The new augmented-reality glasses were just one of several longer-term products Google unveiled at its annual Google I/O developer conference, aimed at bridging the real world and the company's digital universe of Search, Maps and other services using the latest advances in artificial intelligence. "What we're working on is technology that enables us to break down language barriers, taking years of research in Google Translate and bringing that to glasses," said Eddie Chung, a director of product management at Google, calling the capability "subtitles for the world." Selling more hardware could help Google increase profit by keeping users in its network of technology, where it does not have to split ad sales with device makers such as Apple Inc (AAPL.O) and Samsung Electronics Co (005930.KS) that help distribute its services.
Silicon Valley CEOs usually focus on the positives when announcing their company's next big thing. In 2007, Apple's Steve Jobs lauded the first iPhone's "revolutionary user interface" and "breakthrough software." Google CEO Sundar Pichai took a different tack at his company's annual conference Wednesday when he announced a beta test of Google's "most advanced conversational AI yet." Pichai said the chatbot, known as LaMDA 2, can converse on any topic and had performed well in tests with Google employees. He announced a forthcoming app called AI Test Kitchen that will make the bot available for outsiders to try.
In brief Miscreants can easily steal someone else's identity by tricking live facial recognition software using deepfakes, according to a new report. Sensity AI, a startup focused on tackling identity fraud, carried out a series of pretend attacks. Engineers scanned the image of someone from an ID card and mapped their likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the pretend attacker was a real user. So-called "liveness tests" try to authenticate identities in real time, relying on images or video streams from cameras, such as the face recognition used to unlock mobile phones.
Babak Hodjat is the CTO for AI at Cognizant, where he leads a team of developers and researchers bringing advanced AI solutions to businesses. Babak is the former co-founder and CEO of Sentient, responsible for the core technology behind the world's largest distributed artificial intelligence system. Babak was also the founder of the world's first AI-driven hedge fund, Sentient Investment Management. Babak is a serial entrepreneur, having started a number of Silicon Valley companies as main inventor and technologist. Prior to co-founding Sentient, Babak was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering.
Apple has arguably changed our lives more than any other company in the world during the past two decades or so. But aside from its digital devices such as iPhones, laptops, watches and operating systems, is there another direction it could go? The somewhat tentative answer to that has been transport, in the form of electric self-driving vehicles. Is Apple gearing up to challenge electric vehicle market leader Tesla, and what progress has been made so far? An Apple-branded car has been mooted for some years now, with sporadic reports of progress being made.
Apple's long-awaited Apple Car could have virtual displays on the inside instead of clear windows, according to a new patent. The tech giant has filed a patent for a virtual reality (VR) vehicle system that matches up 'virtual views' with the physical motion of a car as it's travelling. For example, if the car was careering down a hill, the system could project a virtual image of a rollercoaster ride. Chairs in the car would move about to match the visuals, the patent suggests, much like an immersive '4DX' cinema experience. But it would mean passing views of the real world – such as a beautiful medieval cathedral or stunning coastal hills – would be entirely replaced with virtual graphics.
Today, May 19, is Global Accessibility Awareness Day, and Apple has announced several new accessibility features to mark the occasion. For people who are blind or have low vision, Apple has a feature called Door Detection, which is designed to help users locate a door when arriving at a new destination. The feature works on iPad and iPhone models with the LiDAR scanner and combines the LiDAR with the device's camera and AI capabilities. It will show up within Magnifier, which already hosts several accessibility features, including the People Detection feature launched in 2020. Door Detection will only work on the 2nd and 3rd gen 11-inch iPad Pro, 4th and 5th gen 12.9-inch iPad Pro models, as well as iPhone 12 Pro and 13 Pro devices.
An Apple executive who oversaw Apple's machine learning and artificial intelligence efforts has left the company in recent weeks, citing its stringent return-to-office policy, according to Bloomberg. Ian Goodfellow is now reportedly joining Google's DeepMind team as an individual contributor, a few years after he left the tech giant for Apple. Based on his LinkedIn profile, Goodfellow worked in different capacities for Google since 2013, including as a research scientist and as a software engineering intern. Bloomberg says the former Apple exec referenced the policy in a note about his departure addressed to staff members. In April, Apple announced that it was going to start implementing its return-to-office policy on May 23rd and will be requiring employees to work in its offices at least three times a week.
We've now tested every version of Apple's M1 processor, from the first M1 chip in the 13-inch MacBook Pro all the way up to the M1 Ultra in the new Mac Studio, and the chip's ability to scale performance is pretty incredible. The M1 Ultra fuses two M1 Max chips together to get you a processor with 20 CPU cores and 64 GPU cores, along with up to 128GB of RAM, and it's one of the fastest processors we've ever tested. We asked what tests you'd like to see run on the M1 Ultra and assembled quite a list, including Adobe Lightroom and Premiere Pro, DaVinci Resolve and Fusion, 3D modeling in Blender, machine learning tests like TensorFlow and PyTorch, and even some gaming. Amazingly, the M1 Ultra really does seem to be around twice as fast as the M1 Max in most applications. Whatever overhead is required to shuffle data around such a large chip, it rarely impacts CPU performance.
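Benchmark results like these usually come from repeated timed runs of a fixed workload, keeping the best time to reduce noise. As a rough illustration only (not the testers' actual methodology), a minimal CPU matrix-multiply benchmark in Python might look like the sketch below; the function name `time_matmul` and the use of NumPy as the workload are assumptions for this example.

```python
import time
import numpy as np

def time_matmul(n=2048, repeats=5):
    """Time an n x n float32 matrix multiply, keeping the best of several runs."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b  # the workload being timed
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    seconds = time_matmul()
    # A matmul does roughly 2*n^3 floating-point operations.
    gflops = 2 * 2048**3 / seconds / 1e9
    print(f"best run: {seconds:.3f}s (~{gflops:.1f} GFLOP/s)")
```

Comparing the best-run times (or derived GFLOP/s figures) across two chips gives the kind of "around twice as fast" scaling ratio quoted above.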