Eliminate the middleware tier and communicate directly with back-end APIs for better security, lower cost, and greater speed. High-volume web sites that offload scale to the front end, using techniques such as edge caching with a partner content delivery network (CDN), see many benefits, including better performance and a simpler, more resilient, and potentially cheaper infrastructure to maintain. But one of the main questions I get when talking about this philosophy with folks is: What about security? How do you securely handle things like authorization to APIs, or prevent eavesdropping on and tampering with data in transit, when your application lives mainly on the client side? It's easy to assume that a mostly client-side site can't be secure.
In this video from the Stanford HPC Conference, DK Panda from Ohio State University presents: Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems. "This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS: OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented. For the Deep Learning domain, we will focus on popular Deep Learning frameworks (Caffe, CNTK, and TensorFlow) and how to extract performance and scalability with the MVAPICH2-GDR MPI library and RDMA-enabled Big Data stacks."
An academic study published last month shows that, despite years' worth of research into the woeful state of network traffic inspection equipment, vendors are still shipping appliances that irrevocably break TLS encryption for the end user. Encrypted-traffic inspection devices (also known as middleware), either specialized hardware or sophisticated software, have been used in enterprise networks for more than two decades. System administrators deploy such appliances to create a man-in-the-middle TLS proxy that can look inside HTTPS-encrypted traffic, to scan for malware or phishing links or to comply with law enforcement or national security requirements. All such devices work the same way, creating a TLS server on the internal network and a TLS client on the external network. The TLS server receives traffic from the user and decrypts the connection, allowing the appliance to inspect the traffic; the appliance then re-encrypts and relays the connection to the intended server, mimicking the browser via its own TLS client.
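The relay loop at the heart of such an appliance can be sketched in a few lines. The snippet below is a minimal illustration of the decrypt-inspect-re-encrypt pipeline, not production code: it assumes both TLS handshakes have already completed (the appliance terminates the user's connection with its own certificate and opens a browser-mimicking client connection to the real server), so only the plaintext pumping and the inspection hook are shown; the `inspect` callback and `proxy` helper are hypothetical names for illustration.

```python
import socket
import threading


def relay(src: socket.socket, dst: socket.socket,
          inspect=lambda chunk: chunk, bufsize: int = 4096) -> int:
    """Pump bytes from src to dst until src closes; returns bytes received.

    In a real inspection appliance, `src` is the plaintext side of the
    internal TLS server and `dst` the plaintext side of the external TLS
    client -- `inspect` is where malware/phishing scanning would happen,
    between decryption and re-encryption.
    """
    total = 0
    while True:
        chunk = src.recv(bufsize)
        if not chunk:            # peer closed its half of the connection
            break
        dst.sendall(inspect(chunk))
        total += len(chunk)
    return total


def proxy(client: socket.socket, upstream: socket.socket) -> None:
    """A full proxy runs two relays concurrently, one per direction."""
    t = threading.Thread(target=relay, args=(upstream, client))
    t.start()
    relay(client, upstream)
    t.join()
```

In practice the two endpoints would be `ssl.SSLSocket` objects: the internal side wrapped with `ssl.SSLContext.wrap_socket(..., server_side=True)` using the appliance's own certificate, the external side an ordinary TLS client connection to the intended server, which is exactly why a sloppy client implementation on the external side can downgrade the security the user's browser would otherwise provide.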
The AAAI 2010 Workshop on Enabling Intelligence through Middleware (held during the Twenty-Fourth AAAI Conference on Artificial Intelligence) focused on the issues and opportunities inherent in the robotics middleware packages we use. The workshop featured three invited talks and six presentations of middleware research. This report presents the highlights of that discussion and the packages presented.
Three years ago I wrote about how Red Hat was bringing its JBoss Java Enterprise Edition (JEE) middleware to the PaaS cloud. It took longer than I expected, but the full Red Hat JBoss Middleware stack is now containerized and available on Red Hat's OpenShift Platform-as-a-Service (PaaS) cloud. JBoss comprises a complete family of open-source, lightweight development frameworks and servers, and Red Hat has now brought it into 21st-century computing.