InfiniteIO, maker of the world's fastest metadata platform for reducing application latency, today announced the new Application Accelerator, which delivers dramatic performance improvements for critical applications by processing file metadata independently from on-premises storage or cloud systems. The new platform gives organizations across industries the lowest possible latency for their mission-critical applications, such as AI/machine learning, HPC and genomics, while minimizing disruption to IT teams.

"Bandwidth and I/O challenges have been largely overcome, yet reducing latency remains a significant barrier to improving application performance," said Henry Baltazar, vice president of research at 451 Research. "Metadata requests are a large part of file system latency, making up the vast majority of requests to a storage system or cloud. InfiniteIO's approach to abstracting metadata from file data offers IT managers a nondisruptive way to immediately accelerate application performance."
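The idea of answering metadata requests separately from file data can be sketched in a few lines: keep file attributes in a fast in-memory layer and only fall through to the slower storage or cloud backend on a miss. This is a minimal illustrative sketch, not InfiniteIO's actual implementation; the class and function names are hypothetical.

```python
import time

class MetadataCache:
    """Illustrative in-memory layer that answers file-metadata lookups
    without touching backend storage on every request."""

    def __init__(self, backend):
        self.backend = backend  # slow storage/cloud lookup (callable)
        self.cache = {}         # path -> metadata dict

    def stat(self, path):
        # Serve metadata from memory when possible; fall back to storage.
        if path not in self.cache:
            self.cache[path] = self.backend(path)
        return self.cache[path]

def slow_backend(path):
    """Stand-in for a storage or cloud round trip."""
    time.sleep(0.05)  # simulated network/disk latency
    return {"path": path, "size": 1024, "mtime": 1568678400}

cache = MetadataCache(slow_backend)
first = cache.stat("/data/file.bin")   # incurs the backend round trip
second = cache.stat("/data/file.bin")  # served from memory, near-zero latency
```

Since metadata calls dominate request volume, even this naive cache removes most round trips to the backend; the engineering difficulty in a real product lies in keeping the cached attributes coherent with the storage system.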
It used to be that every IT team could define and monitor clear network paths between their enterprise and data center. They could control and regulate applications that ran on internal systems because they installed and hosted all of their data locally, without accessing the cloud. That level of control afforded greater visibility into issues like latency, allowing them to troubleshoot and resolve problems quickly. Fast-forward a decade, and the proliferation of SaaS applications and cloud services has complicated network performance diagnostics to the point that it requires a rethink. What is the underlying cause of this trend?
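When the path to an application crosses networks the IT team no longer controls, the first diagnostic step is usually to measure latency from the client side. The sketch below times an arbitrary probe callable and reports basic statistics; in practice the probe would wrap a DNS lookup, TCP connect, or HTTP request to the SaaS endpoint in question (the names here are illustrative, and the probe shown is a simulated call rather than a real service).

```python
import statistics
import time

def measure_latency(probe, samples=5):
    """Run `probe` several times and report latency statistics in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()  # e.g. a TCP connect or HTTP GET against a SaaS endpoint
        times.append((time.perf_counter() - start) * 1000.0)
    return {
        "min": min(times),
        "median": statistics.median(times),
        "max": max(times),
    }

# Simulated 10 ms service call in place of a real external request.
stats = measure_latency(lambda: time.sleep(0.01))
```

Reporting the median alongside min and max matters: a single slow sample (a DNS cache miss, a retransmit) can dominate the mean, while the median shows what users typically experience.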
Arm's interconnect technology on GF's 12LP process enables high performance and low latency while increasing bandwidth for high-core-count designs in AI, cloud computing and mobile SoCs. GLOBALFOUNDRIES, the world's leading specialty foundry, today announced that it has taped out an Arm-based 3D high-density test chip that will enable a new level of system performance and power efficiency for computing applications such as AI/ML and high-end consumer mobile and wireless solutions. The new chip was fabricated using GF's 12nm Leading-Performance (12LP) FinFET process and features Arm's mesh interconnect technology in 3D, which allows data to take a more direct path to other cores, minimizing latency while increasing data transfer rates as demanded by data centers, edge computing and high-end consumer applications. The delivery of this chip demonstrates the fast progress that Arm and GF are making in researching and developing differentiated solutions that enable improvements in device density and performance for scalable high-performance computing. Moreover, the companies validated a 3D Design-for-Test (DFT) methodology using GF's hybrid wafer-to-wafer bonding, which can enable up to 1 million 3D connections per mm², extending the ability to scale 12nm designs long into the future.
SAN FRANCISCO, September 17, 2019 -- Oracle Exadata Database Machine X8M, available today, sets a new bar and changes the dynamics of the database infrastructure market. Exadata X8M combines Intel Optane DC persistent memory and 100 gigabit remote direct memory access (RDMA) over Converged Ethernet (RoCE) to remove storage bottlenecks and dramatically increase performance for the most demanding workloads, such as online transaction processing (OLTP), analytics, IoT, fraud detection and high-frequency trading.

"With Exadata X8M, we deliver in-memory performance with all the benefits of shared storage for both OLTP and analytics," said Juan Loaiza, executive vice president, mission-critical database technologies, Oracle. "Reducing response times by an order of magnitude using direct database access to shared persistent memory accelerates every OLTP application, and is a game changer for applications that need real-time access to large amounts of data such as fraud detection and personalized shopping." Exadata X8M helps customers perform existing tasks faster and accelerates time-to-insight, while also enabling deeper and more frequent analyses.
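The performance claim rests on the access path: persistent memory is read with ordinary load instructions rather than through the block I/O stack. A loose, hedged analogy in Python is memory-mapping a file, where the application reads bytes directly from the mapped region instead of issuing a read() system call per access. This sketch only illustrates that general direct-access pattern; it is not Oracle's implementation, and the file layout here is invented.

```python
import mmap
import os
import tempfile

# Create a small data file of fixed-size 8-byte "rows" (hypothetical layout).
path = os.path.join(tempfile.mkdtemp(), "table.dat")
with open(path, "wb") as f:
    f.write(b"row-0001" * 128)  # 1 KiB of identical sample rows

# Map the file and read a row with a plain memory access -- no per-row
# read() system call, loosely analogous to load/store access to
# persistent memory bypassing the block I/O path.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        row = bytes(mm[0:8])  # direct slice of the mapped region
```

The analogy is imperfect (a page fault still goes through the kernel, and RDMA additionally bypasses the remote host's CPU), but it conveys why removing the system-call and interrupt path per access can cut response times by an order of magnitude.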