The three big cloud providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), want developers and data scientists to develop, test, and deploy machine learning models on their clouds. It's a lucrative business for them: testing models often needs a burst of infrastructure, and models in production often require high availability. But the providers don't want to compete for your business only on infrastructure, service levels, and pricing, so they also focus on versatile on-ramps that make it easier for customers to use their machine learning capabilities. Each public cloud offers multiple data storage options, including serverless databases, data warehouses, data lakes, and NoSQL datastores, making it likely that you will develop models in proximity to where your data resides.
Artificial intelligence (AI) has been hugely transformative in industries with access to huge datasets and trained algorithms to analyze and interpret them. Probably the most obvious examples of this success can be found in consumer-facing internet businesses like Google, Amazon, Netflix, or Facebook. Over the last two decades, companies such as these have grown into some of the world's largest and most powerful corporations. In many ways, their growth can be put down to their exposure to the ever-growing volumes of data being churned out by our increasingly digitized society. But if AI is going to unlock the truly world-changing value that many believe it will – rather than simply making some very smart people in Silicon Valley very rich – then businesses in other industries have to consider different approaches.
Amazon Web Services on Monday said it's bringing a new set of EC2 instances into general availability, including G5g instances, which pair Arm-based Graviton2 processors with NVIDIA T4G GPUs for GPU-accelerated workloads. AWS highlighted a few workloads that G5g instances would serve well. For Android game streaming, the instances provide up to 30% lower cost per stream per hour than x86-based GPU instances, Amazon said. For ML inference, G5g instances are well suited for models that are sensitive to CPU performance or that leverage Nvidia's AI libraries. For graphics rendering, Amazon says G5g instances are its most cost-effective option. The instances are compatible with a number of graphics and machine learning libraries on Linux, including NVENC, NVDEC, nvJPEG, OpenGL, Vulkan, CUDA, cuDNN, cuBLAS, and TensorRT.
Amazon Web Services is expanding into robot fleet management with a cloud service based on Amazon's experience managing 350,000 robots in its fulfillment centers. At its re:Invent conference, the company launched AWS IoT RoboRunner, a robotics service designed to let enterprises build and deploy applications so that fleets of robots operate well together. AWS IoT RoboRunner is another part of the cloud provider's robotics stack.
If you started annotating today, your new year's resolutions would comfortably have been made and broken by the time you finished. And that's just for something novel, like training a system to pick out a lost child in a busy shopping mall. It takes even more images to help a delivery robot safely navigate spaces where children are playing. The data scientists working on these systems can spend up to 80% of their time gathering, cleaning, and manually annotating real-world images to be digested by AI systems. That leaves little time for network development or for gleaning insights from the data.
The Center for Analysis has commissioned a study aimed at developing "reliable technology for identifying personality traits." Authorities decided to use artificial intelligence to make a psychological diagnosis of a person based on data from social networks. According to the tender documents, the results of the competition will be announced on Monday, November 29, and the contractor must complete the work by September 2024. The use of digital traces (data that a user leaves behind during any activity on the Internet) provides "sufficient opportunities to evaluate a person and predict his or her behavior without psychological testing", which requires voluntary consent. Users' social networks were named as one source of such traces.
Tech company Promobot is on the lookout for a face for its humanoid robot assistant to work in hotels, shopping malls and other crowded places. The company is searching for a 'kind and friendly' face to be reproduced on potentially thousands of versions of the robots worldwide. The company is ready to pay £150,000 ($200,000) to anybody willing to transfer the rights to their face and voice forever. 'Since 2019, we have been actively manufacturing and supplying humanoid robots to the market. Our new clients want to launch a large-scale project, and for this, they need to license a new robot appearance to avoid legal delays,' said Promobot, which claims to be the largest service robotics manufacturer in Northern and Eastern Europe.
AI has, by now, proven its power and impact. The artificial intelligence space is constantly evolving and improving with every passing day, and tech companies and researchers are investing heavily in innovation because of AI's massive potential to address the world's biggest problems. As we head towards the end of 2021, let us look back at some of the major AI innovations and incidents that took centre stage this year. OpenAI released DALL·E, a 12-billion-parameter version of GPT-3 trained on a dataset of text-image pairs to generate images from text descriptions.
D2iQ recently released version 2.0 of the D2iQ Kubernetes Platform (DKP), a platform to help organizations run Kubernetes workloads at scale. The new release provides a single pane of glass for managing multi-cluster environments and running applications across any infrastructure, whether private cloud, public cloud, or the network edge. DKP 2.0 is built on the Cluster API, a Kubernetes sub-project that simplifies creating, configuring, and managing multiple clusters, to support Day 2 operations out of the box. The release also adds workload auto-scaling to improve availability, along with support for immutable operating systems such as Flatcar Linux. InfoQ sat down with Tobi Knaup, CEO of D2iQ, at KubeCon + CloudNativeCon NA 2021 and talked about DKP 2.0, its relevance to developers, and the future of Kubernetes.
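To make the Cluster API mention concrete: Cluster API lets you describe a cluster itself as a declarative Kubernetes resource. The sketch below is a hypothetical minimal manifest; the names and the Docker infrastructure provider are illustrative assumptions, not anything specific to DKP.

```yaml
# Hypothetical minimal Cluster API (v1beta1) manifest.
# Names and the Docker provider are placeholder assumptions for illustration.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # placeholder cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:              # reference to a separately defined control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:            # any supported infrastructure provider works here
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: demo-cluster
```

Applying a manifest like this with `kubectl apply -f` asks the Cluster API controllers to create and reconcile the cluster, which is what allows platforms built on it to treat whole clusters as declarative, version-controlled resources.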
Being in control of customer data is one of the ways retailers like Amazon, Spotify and Netflix are able to tap into consumer behavior and create customized experiences whenever a user logs in. Those are some of the reasons Amazon, in particular, is poised to grab 50% of the U.S. e-commerce market this year, and why Sydney-based Particular Audience wants to break down the data silos within e-commerce to give any retailer a chance to gather similar data on their customers and personalize experiences. Particular Audience provides product discovery tools for retailers that are powered by artificial intelligence and machine learning. In fact, the company wants to go further and offer personalization based on anonymity, without compromising personal data, CEO James Taylor told TechCrunch. Taylor launched Particular Audience in 2019 after taking a few years to work out the technology. The global pandemic threw a wrench in some plans, with Taylor and a handful of executives taking a pay cut so as not to have to let any employees go.