How Walmart is going all in on artificial intelligence


As the digital era continues to turn retail on its head, Walmart's story is particularly interesting: the brand is somehow both the challenger and the incumbent. Walmart has been the world's largest retailer since 1988, but as Sears proved, prominence isn't permanent. The retailer endured a series of disappointing quarters and seemed on track to become Amazon's biggest casualty. Galagher Jeff, the company's VP of Merchandising Operations and Business Analytics, acknowledged as much when he spoke at NRF 2019: Retail's Big Show in New York City earlier this week. "We had a business that was successful and we stopped taking risks," admits Jeff.

Don't Get Left Behind – Transform Your Data Protection With Machine Learning and AI


If you ventured into the North Hall of the recent Consumer Electronics Show (CES) in Las Vegas, you could be forgiven for thinking you had stepped into a car show. AI was everywhere, with innovators showcasing how the technology would make everyone's life easier and give us back that most valuable gift: time. That's something I and every commuter can appreciate, with the Auto Insurance Center estimating that the average commuter spends 42 hours a year (a full work week!) in traffic. This is why artificial intelligence and machine learning technologies are viewed so strategically, not only in our daily lives but also in business. When our cars transform from a tool you use to get to a destination into an AI-driven service that delivers you to your destination (the self-driving car as your digital chauffeur), you can refocus your time and energy on higher-value needs while letting the intelligence in a connected, AI-driven car manage the mundane tasks.

Why Is AI Implementation Becoming a Must for Automotive Industry?


Advancements in artificial intelligence continue to reshape industries such as aviation, manufacturing, and technology. This is because the offerings of AI, machine learning, and deep learning can help companies become more efficient. But one industry witnessing dramatic change is the automotive sector. AI is revolutionizing this industry, creating entirely new ways for people to get around, and it will also change how traffic is managed in cities. Attempts to create driverless cars are gaining momentum with the availability of advanced technologies, notably AI.

Most CEOs Don't Know Where to Deploy AI Within their Business - InformationWeek


Like many organizations, your company may be pouring resources into the development of artificial intelligence. Investment in the space is skyrocketing. IDC predicts that by the end of 2018, companies will have spent 54% more on AI than they did in 2017. However, for many, that investment has yet to pay off. A recent survey from MIT Sloan Management Review and Boston Consulting Group found that less than 5% of organizations have incorporated AI extensively into their processes.

CES 2019: AI startup Eyeris will know your car from the inside-out


For the past several years at the Consumer Electronics Show, six-year-old AI startup Eyeris has taken a hotel suite to explain how the car of the future will work. This year, the Palo Alto-based company rolled out a Tesla Model S, emblazoned with the company logo, in front of the Las Vegas Westgate to take visitors inside the future of car awareness, if you will. "Today, it's just following your eyes," says Modar Alaoui, founder and CEO of the startup, explaining from the back seat how today's "driver monitoring system," or DMS, functions. Cameras in the dash observe eye patterns to detect whether you're drowsy, so they can prompt you to pay attention. He tilts his head back in his seat to reveal the weak spot.
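A common heuristic behind the eye tracking Alaoui describes is the eye aspect ratio (EAR): the ratio of an eye's vertical to horizontal landmark distances drops toward zero as the eye closes. The following is a minimal, hypothetical sketch, not Eyeris's actual method; the landmark layout, threshold, and frame count are invented for illustration.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, as produced by a
    facial-landmark detector. EAR drops toward 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def is_drowsy(ear_values, threshold=0.2, min_consecutive_frames=15):
    """Flag drowsiness if EAR stays below threshold long enough."""
    run = 0
    for ear in ear_values:
        run = run + 1 if ear < threshold else 0
        if run >= min_consecutive_frames:
            return True
    return False

# Six illustrative landmarks for a wide-open eye.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(round(eye_aspect_ratio(open_eye), 2))  # 0.67 — eye open
```

A real DMS would feed per-frame EAR values from a camera pipeline into a check like `is_drowsy` before prompting the driver.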

Artificial intelligence can't save us from human stupidity


Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear. Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.

The Future of Ethics might be hanging on that #AI training dataset


With algorithms playing an increasingly important role in business transactions, from online retail to innovative brick-and-mortar, from structuring dispersed (and often non-standardized) electronic health records to diagnosing patients and connecting them with the right specialist, and from autonomous vehicles deciding between saving the life of a passenger on board or a pedestrian at the roadside, many are warming up to the idea of an AI regulatory framework, which cannot happen soon enough. But as that framework is far from ready, companies should embrace an AI ethic based not only on possibilities (what we can do) but also on ethical implications (what we should not pursue). The importance is underscored by two examples that made it to mainstream media: Amazon scrapping its HR-related AI project because it showed recruiting bias, and Equivant/Northpointe, which had to kill its machine-learning parole-recommendation tool because of wrong, biased recommendations on prisoners. The risks should not be underestimated. In an August 2018 article in the MIT Sloan Management Review, Davenport and Foutty identify seven attributes of AI-driven leaders, or as I prefer to call them, leaders in the era of AI.
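One simple audit that can surface the kind of recruiting bias described above is the "four-fifths rule": compare each group's selection rate against the most-favored group's. This is a hypothetical sketch, not Amazon's or Equivant's actual system; the decision data and the 0.8 threshold are illustrative.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ok(outcomes, threshold=0.8):
    """Pass only if every group's selection rate is at least
    `threshold` times that of the most-favored group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(r / best >= threshold for r in rates.values())

# Invented example: group_a is selected 75% of the time, group_b 25%.
decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
print(disparate_impact_ok(decisions))  # False — 0.25/0.75 is well under 0.8
```

A check this crude obviously cannot certify a model as fair, but running it on a model's outputs per demographic group is a cheap first line of defense.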

The information age is over, welcome to the machine learning age


I first used a computer to do real work in 1985. I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgrading to the first version of Windows. People used to scoff at the massive gray machines in the computer lab, but secretly they suspected something was happening. You could say the information age started in 1965, when Gordon Moore formulated Moore's Law (a prediction that the number of transistors on a chip would double every year, later revised to every 18 months). It was all about escalating computing power, and he was right about the coming revolution.
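The doubling Moore predicted is a simple exponential: count(t) = count(0) × 2^(months / 18). A quick sketch, using the Intel 4004's 2,300 transistors (1971) as an illustrative starting point:

```python
def moores_law(initial_count, months, doubling_period_months=18):
    """Projected transistor count after `months`, doubling every
    `doubling_period_months` (Moore's revised 18-month figure)."""
    return initial_count * 2 ** (months / doubling_period_months)

# One doubling period after the 2,300-transistor Intel 4004:
print(moores_law(2300, 18))  # 4600.0
```

Compounding like this is why the curve looks flat for years and then explodes: ten years is between six and seven doublings, roughly a hundredfold increase.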

What Tech Will Look Like in 2039


For the first issue of the PCMag Digital Edition in 2019, we're fast-forwarding to envision what technology--and our tech-driven society--will look like in 2039. We wanted to explore the myriad ways in which tech will be more intertwined with our lives and will have changed our culture. To do so, we interviewed a select group of futurists, execs, academics, researchers, and a speculative fiction writer, who gave us some thoughtful predictions. Each of our interviewees has a unique perspective on the most important factors that will influence our tech-driven future, including artificial intelligence, automation, biotechnology, nanotechnology, autonomous vehicles, Internet of Things devices, smart cities, and much more. They also speculate how broader issues such as climate change and online privacy and security will affect us and the technology with which we'll be living. It's our best educated guess at predicting what our world and technology's role in it will look like--whether our lives will be dystopian, utopian, or somewhere in that vast gray area in the middle.

Jason Silva is host of the Emmy-nominated series Brain Games on National Geographic. He also created and hosts the YouTube series "Shots of Awe." The ebullient Venezuelan-born documentary filmmaker, speaker, and TV personality--who was once described by The Atlantic as "a Timothy Leary of the viral video age"--is a techno-optimist whose ideas are influenced by (among others) fellow futurist Ray Kurzweil, Wired founding editor Kevin Kelly and his concept of the Technium.

In the next 20 years, we're going to see exponential progress in some of these nascent technologies, like virtual reality and augmented reality. I think the next thing to dematerialize is the smartphone itself. What that looks like, who knows?
Maybe it's a pair of eyeglasses we put on that are connected to some kind of computational device, and it will beam an augmented reality interface that fully overlays, that is contextually aware, and enhances the way we interface with the world--so that essentially, each one of us has that kind of personalized experience of reality.

No driver, no problem - Mediaan


In conclusion, in order to explain how autonomous cars use machine learning, we need to reverse-engineer the human approach to driving and reproduce it. This can be summed up in three general steps: observe the environment; make a decision based on what you register, the rules of the game, and your own experience; and finally execute an action. Instead of seeing and hearing, a digital system can make use of radar, GPS, motion sensors, laser light, or computer vision (as in Facebook's face-recognition software). For example: how many "objects" are in the vicinity of the car, and what are their positions? Or what are the weather conditions?
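The three steps above can be sketched as an observe-decide-act loop. This is a purely illustrative toy, not a real driving stack; the sensor names, distance margins, and rain rule are invented for the example.

```python
def observe(sensors):
    """Fuse raw inputs (radar, GPS, cameras, ...) into a world model."""
    return {
        "obstacle_distance_m": sensors["radar_m"],
        "raining": sensors["rain_sensor"],
    }

def decide(world, experience_margin_m=5.0):
    """Apply the 'rules of the game' plus a learned safety margin:
    here, doubling the braking distance in rain stands in for experience."""
    margin = experience_margin_m * (2.0 if world["raining"] else 1.0)
    if world["obstacle_distance_m"] < margin:
        return "brake"
    return "maintain_speed"

def act(decision):
    """Translate the decision into an actuator command."""
    return {"brake": "applying brakes", "maintain_speed": "cruising"}[decision]

reading = {"radar_m": 4.0, "rain_sensor": False}
print(act(decide(observe(reading))))  # applying brakes
```

Real systems run this loop many times per second, with the "decide" step backed by trained models rather than two hand-written rules, but the overall shape is the same.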