Video is the world's largest generator of data, created every day by over 500 million cameras worldwide, a number slated to double by 2020. The potential, if we could actually analyze all that data, is off the charts: data from government property and public transit, commercial buildings, roadways, traffic stops, retail locations, and more. The result would be what NVIDIA calls AI Cities: a thinking robot with billions of eyes, trained on residents and programmed to help keep people safe.
I don't know about you, but I was not the most athletic kid growing up. It took me forever to learn to make a jump shot, and when I started playing golf after college my short game was an absolute disaster. I always had a hard time visualising what I needed to do differently, and having a coach tell me what to do never seemed to do the trick.
The latest update to Microsoft's flagship Windows 10 computer software has been made available to consumers for the first time. The update has various new features, including a native app called Paint 3D, an image and video editing app called Story Remix and a feature called Timeline that enables users to pick up tasks they were previously working on. The 'Fluent Design' redesign is part of an effort to modernise the operating system and will be rolled out from today. Users are recommended to back up their computer before downloading the update, and Microsoft will send an alert when it is available. Microsoft first revealed the new features during the second day of its Build developer conference in Seattle in May.
Pedro Galveia, Yard and Ship Planner and Port Consultant at Yilport Sotagus in Lisbon, explained in a new video roughly how automation will break down among differently sized terminals. Observations can be made regarding three main areas of terminal automation: the Ship-to-Shore (STS) cranes, the stacking area and the gates. Galveia explains in the video: "For bigger terminals, above 4-5 million moves per year, the global terminal operators behind them will put pressure to automate the STS cranes. Terminals with a half-million moves will search for process automation and clean flows in their STS cranes." A related paper by Automated Terminal Systems discusses how efficient handling of mega-ships requires some level of automation.
How do people assign a cause to events they witness? Some philosophers have suggested that people determine responsibility for a particular outcome by imagining what would have happened if a suspected cause had not intervened. This kind of reasoning, known as counterfactual simulation, is believed to occur in many situations. For example, soccer referees deciding whether a player should be credited with an "own goal" -- a goal accidentally scored for the opposing team -- must try to determine what would have happened had the player not touched the ball. This process can be conscious, as in the soccer example, or unconscious, so that we are not even aware we are doing it.
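The counterfactual test described above can be sketched in a few lines of Python. This is a toy illustration only, not any researcher's actual model; the event names and the `goal_scored` function are invented for the example:

```python
def is_counterfactual_cause(outcome, events, candidate):
    """A candidate event counts as a cause if the outcome occurred in the
    actual world but would not have occurred had the candidate been absent."""
    actual = outcome(events)
    simulated = outcome([e for e in events if e != candidate])
    return actual and not simulated

# Toy own-goal scenario: the ball goes in only if the defender deflects the shot.
def goal_scored(events):
    return "shot" in events and "deflection" in events

events = ["shot", "deflection"]
print(is_counterfactual_cause(goal_scored, events, "deflection"))  # True:
# without the defender's touch, no goal, so the touch is judged the cause.
```

The same check returns False when the outcome would have happened anyway, which is exactly the distinction a referee's mental simulation is thought to draw.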
Nobody likes it when their binge watching is disrupted by a buffering video. While streaming sites like Netflix have offered workarounds for connectivity problems (including offline viewing and quality controls), researchers are tackling the issue head on. In August, a team from MIT CSAIL unveiled its solution: a neural network that can pick the ideal algorithms to ensure a smooth stream at the best possible quality. But they're not alone in their quest to banish video stutters. The folks at Switzerland's EPFL university are also tapping into machine learning as part of their own method.
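Neither team's model is detailed here, but the core decision such systems automate — choosing the next video chunk's bitrate from the measured throughput and the playback buffer — can be sketched with a simple rule-based policy. The bitrate ladder, safety margin, and buffer threshold below are all invented for illustration:

```python
BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # hypothetical bitrate ladder

def pick_bitrate(throughput_kbps, buffer_s, safety=0.8, min_buffer_s=5.0):
    """Pick the highest bitrate that fits within a safety fraction of the
    measured throughput; drop to the lowest rung when the buffer runs low."""
    if buffer_s < min_buffer_s:
        return BITRATES_KBPS[0]  # near a rebuffer: play it safe
    affordable = [b for b in BITRATES_KBPS if b <= throughput_kbps * safety]
    return affordable[-1] if affordable else BITRATES_KBPS[0]

print(pick_bitrate(4000, buffer_s=12.0))  # 2850: fits within 80% of 4000 kbps
print(pick_bitrate(4000, buffer_s=2.0))   # 300: buffer too low, avoid a stall
```

A learned policy replaces these hand-tuned thresholds with a model trained to maximize quality while minimizing rebuffering, which is what makes the neural-network approach attractive.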
It's impossible to predict whether Google's brand-new Pixel 2 and Pixel 2 XL smartphones will fare better than last year's well-reviewed but poor-selling first-generation models. Among other reasons, the smartphone crowd loves its iPhones and Galaxys, and Apple and Samsung obviously remain formidable competitors. What I can say is that the new phones prove how good Google has gotten at hardware, bolstered by artificial intelligence and software. And if you're in the market for a premium handset, the Pixels belong in the conversation. For starters, the AI-infused Google Assistant that was a banner feature on the first Pixels is only getting smarter.
The Echo Show is not just Amazon's best smart speaker; it's the most capable mainstream smart home assistant on the market. An Intel Atom x5-Z8350 processor and a 7-inch color touchscreen pump its price tag up to $230, but the display is worth the added cost to have at least one in a smart home with other Echo speakers. And the Show's eight-element far-field mic array is stronger than the ones on Amazon's other Echos, which for me eliminated the need to have an Echo Dot in an adjoining room. Amazon takes full advantage of that display, providing not just useful visual feedback, but also an in-home intercom--with video, if two Echo Shows are used--and a VoIP-type videophone system. I'll elaborate on the intercom feature shortly.
AI has become a hot topic among tech corporations, startups, investors, the media, and the public, in no small part because machine learning platforms have already been doing hard work for years. Last month, NVIDIA announced the addition of Huawei and Alibaba as adopters of Metropolis, its AI platform for smart cities. More than 50 organizations already use Metropolis and, according to NVIDIA, by 2020 there will be 1 billion video cameras worldwide that could be connected to AI platforms to make cities smarter. When connected to AI, cameras can recognize shapes, faces and even the emotions of individuals, which has varied applications: autonomous cars, video surveillance (traffic flow, crime monitoring), and consumer behavior analysis (reaction to ads, for example).
She consults an app on her phone, which asks an increasingly sophisticated series of diagnostic questions. The app also takes in data from Janet's fitness trackers, which monitor heart rate, blood pressure and blood sugar. The app decides that Janet's symptoms look serious and arranges a video chat with a human doctor to discuss options, so that potentially bad news can be presented in a more "human" way. The doctor has remote access to Janet's data, along with a more sophisticated diagnostic artificial intelligence. During the consultation, Janet is booked into a clinic for medical imaging scans to aid further diagnosis.
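The triage step in this scenario can be sketched as a simple scoring rule. Everything here is hypothetical: the symptom weights, the vital-sign thresholds, and the escalation cutoff are invented for illustration, and a real diagnostic app would rely on a far richer model:

```python
# Hypothetical weights a triage app might assign to reported symptoms.
SYMPTOM_WEIGHTS = {"chest_pain": 3, "shortness_of_breath": 3,
                   "dizziness": 2, "fatigue": 1}

def triage(symptoms, heart_rate, systolic_bp):
    """Score reported symptoms plus out-of-range vitals, then decide
    whether to escalate to a video consultation with a human doctor."""
    score = sum(SYMPTOM_WEIGHTS.get(s, 0) for s in symptoms)
    if heart_rate > 110 or heart_rate < 50:
        score += 2  # tachycardia or bradycardia (illustrative cutoffs)
    if systolic_bp > 160 or systolic_bp < 90:
        score += 2  # hypertensive or hypotensive reading
    return "escalate to doctor" if score >= 4 else "self-care advice"

print(triage(["chest_pain", "dizziness"], heart_rate=118, systolic_bp=150))
# escalate to doctor
```

The point of the sketch is the data flow: questionnaire answers and wearable readings feed one decision function, whose output routes the patient either to self-care or to the human doctor described above.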