If nothing else, AI continues to climb the technology hype curve. It is impossible to read the news, browse the web, attend a conference, or even watch television without seeing a reference to how AI is making our lives better. Since Alan Turing declared "what we want is a machine that can learn from experience" in a 1947 lecture to the London Mathematical Society, the imaginations of computer scientists and engineers have run wild with visions of a computer that can answer questions on par with a human. Today, almost everyone in business is looking at how to leverage AI, and there is no shortage of vendors looking to capitalize on the trend. Venture Scanner currently tracks more than 2,000 AI startups that have received more than $26 billion in funding.
When Google abandoned the Chinese search market over government censorship in 2010, it seemed a remarkably principled act of self-sabotage. The company's decision to return to China today, by establishing a new AI research center in Beijing, is all about safeguarding its future. The center was announced at an event in Shanghai today by Fei-Fei Li, a prominent AI researcher and the chief scientist at Google Cloud. With the announcement, Google is acknowledging the growing importance of China for the future of AI. It is also setting the stage for a battle over who gets to deliver AI to the rest of the world.
Another shift in the technology landscape appears to be underway, one with the potential to dramatically alter the way data is created and processed. Edge computing is, in essence, tied to the evolution of the Internet of Things (IoT). As various industries push to connect previously dumb objects to the internet, the way in which these objects talk to one another will change. For some uses, low latency is crucial – think of a connected car needing to decide whether to avoid an object in the road – and so computing will need to take place at the outer reaches, or the 'edge', of the network, nearer the objects themselves. An edge device could be anything that provides an entry point to a network: routers, WANs and switches, for example.
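To see why latency pushes computing toward the edge, consider the connected-car example above. The sketch below compares an on-device decision with one routed to a distant data center; all the millisecond figures are illustrative assumptions, not measurements of any real system.

```python
# Illustrative comparison of edge vs. cloud round-trip latency for a
# connected car deciding whether to avoid an object in the road.
# Every number below is an assumption chosen for illustration.

EDGE_INFERENCE_MS = 10      # assumed on-vehicle model inference time
CLOUD_INFERENCE_MS = 5      # assumed data-center inference time (faster hardware)
CLOUD_NETWORK_RTT_MS = 100  # assumed cellular round trip to a distant data center

def edge_decision_latency_ms():
    """Decision made on the device itself: no network hop at all."""
    return EDGE_INFERENCE_MS

def cloud_decision_latency_ms():
    """Decision made in the cloud: inference time plus the network round trip."""
    return CLOUD_INFERENCE_MS + CLOUD_NETWORK_RTT_MS

if __name__ == "__main__":
    print(f"edge:  {edge_decision_latency_ms()} ms")
    print(f"cloud: {cloud_decision_latency_ms()} ms")
```

Even with faster hardware in the data center, the network round trip dominates, which is the core argument for processing time-critical IoT data at the edge.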
The days of Amazon Web Services as a pure infrastructure provider are over: the company, and its customers, are going serverless and moving up the stack to become a machine learning, data management and artificial intelligence platform. At re:Invent, AWS CEO Andy Jassy pitched his company to data scientists as much as to IT pros deploying Internet of Things tools and various databases. AWS CTO Werner Vogels outlined his vision of computing in the future, and it all revolved around data. "The quality of the data you have will be the differentiator," said Vogels. "Data will have a crucial impact on how companies change behavior and build new systems."
Cloud computing and the Internet of Things (IoT) have spent the last several years in a sort of maximum-acceleration race in which they've lapped the other players several times over and have only one another to measure against. Neither is slowing down, particularly the IoT. According to analyst firm Gartner, the number of IoT devices will hit 20.8 billion by 2020. The world population is expected to reach 8 billion in 2020, meaning there will be roughly 2.6 IoT devices per person on the entire planet. In 2016, the IoT was growing at the rate of 5.5 million new things getting connected every day.
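A quick back-of-the-envelope check on that ratio, using only the two estimates quoted above (Gartner's device forecast and the projected world population):

```python
# Devices per person in 2020, from the figures cited above.
IOT_DEVICES_2020 = 20.8e9        # Gartner's projected IoT device count
WORLD_POPULATION_2020 = 8e9      # projected world population

devices_per_person = IOT_DEVICES_2020 / WORLD_POPULATION_2020
print(round(devices_per_person, 1))  # → 2.6
```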
The 22nd International Cloud Expo, colocated with the 1st DXWorldEXPO, has announced that its Call for Papers is open. The event, to be held June 5-7, 2018, at the Javits Center in New York, NY, and November 6-8, 2018, at the Santa Clara Convention Center in Santa Clara, CA, brings together Cloud Computing, Digital Transformation, Big Data, Internet of Things, DevOps, Machine Learning and WebRTC in one location. The conference tracks for Cloud Expo 2018 have also been announced, and Digital Transformation (DX) is a major focus with the introduction of DXWorldEXPO within the program.
There is a strange and uneasy tension in standing at the base of a wind turbine, amid a power generation farm full of dozens more. The air can seem still even though you can clearly see, and hear, the turbines moving. Indeed, the sound never dies down, although you're standing in precisely the place where you would most expect it to. With all these rotating blades the size of softball fields, it feels and sounds like somewhere you'd expect to find something called "the edge." Yet there's no mechanism by which any of the world's power grids can distinguish renewable power, such as wind-generated, from coal-based or hydroelectric power.
Someplace in America, a safe distance of several feet from the shoulder of a major highway, drone cameras are flying in parallel with a fleet of tanker trucks. The next time you encounter a fuel truck or a milk truck, take a moment to see if there are any buzzing camera drones in its vicinity. They've been commissioned to take thermal pictures of these trucks and transmit them live to a server. If one of these trucks is leaking or isn't well sealed, or if its contents are imbalanced and unsafe, an operations center is notified, and the driver may be told to pull over.
You started off in e-commerce. You're now directly or indirectly in cloud computing, media and entertainment, logistics, payments and other areas as well. Your vision is that customers will meet, work and live at Alibaba, which is pretty much everything. Where does the ambition end, and what's the unifying vision? TSAI: We started the company in 1999 with a mission to make it easy to do business anywhere.
Developers, data scientists, and researchers are solving today's complex challenges with breakthroughs in artificial intelligence, deep learning, and high performance computing (HPC). NVIDIA is working with Amazon Web Services to offer the newest and most powerful GPU-accelerated cloud service based on the latest NVIDIA Volta architecture: the Amazon EC2 P3 instance. Using up to eight NVIDIA Tesla V100 GPUs, you can train your neural networks on massive data sets, using any of the major deep learning frameworks, faster than ever before. Then use the capabilities of GPU parallel computing, running billions of computations, to infer and identify known patterns or objects. With over 500 HPC applications GPU-accelerated, including the top ten HPC applications and every deep learning framework, you can quickly tap into the power of the Tesla V100 GPUs on AWS to boost performance, scale out, accelerate time to results, and save money.
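The data-parallel style described above, in which one operation is applied across many data elements at once to identify known patterns, can be sketched on a CPU. In the illustrative example below, a thread pool stands in for GPU cores; this is purely a conceptual sketch of the parallel pattern, not an AWS or NVIDIA API, and the function names are hypothetical.

```python
# Conceptual sketch of data parallelism: the same matching operation is
# applied to many chunks of data concurrently and the partial results
# are combined. A thread pool stands in for GPU cores here.
from concurrent.futures import ThreadPoolExecutor

def count_pattern(chunk, pattern):
    """Count occurrences of a known pattern in one chunk of data."""
    return chunk.count(pattern)

def parallel_count(data, pattern, workers=8, chunks=8):
    """Split the data, scan every chunk concurrently, sum the results."""
    size = max(1, len(data) // chunks)
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: count_pattern(c, pattern), pieces))

if __name__ == "__main__":
    signal = [1, 0, 1, 1, 0] * 1000   # a toy "sensor" stream
    print(parallel_count(signal, 1))  # total matches across all chunks
```

On a GPU, the same split-scan-combine shape is executed by thousands of cores at once, which is what makes inference over billions of computations tractable.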