Dr. Eli David is a leading AI expert specializing in deep learning and evolutionary computation. He is the co-founder of DeepCube. Over the last several years, deep learning has proved to be the key driver of AI advancement. Loosely inspired by how the human brain operates, deep learning is responsible for advancing AI applications from computer vision to speech recognition to text and data analysis. Deep learning models are typically trained in research labs on large amounts of training data, demonstrating what the technology could achieve in real-world deployments.
I had an interesting talk with AJ Abdallat, CEO of Beyond Limits, a small firm doing interesting things with AI. Their differentiator is that their AI's decisions can be audited, and the AI itself can be edited at a granular level, so corrections generally don't require retraining. As I was listening, it struck me that if we could do this with people (particularly young teenagers, top executives, criminals, and politicians), we could almost instantly make the world a better, safer place. Granted, this approach, particularly if it were being used for commercial aircraft or self-driving cars, should carry a high requirement for substantial simulation before deployment. But this could not only cut years off what would typically be needed for a complex AI development project; it would also allow a level of customization at scale that we don't currently seem to have in this space.
You may have heard the concept of "DevOps" thrown around in the software community. For the uninitiated, it's essentially the merging of development and operations teams into one synergistic whole that can rapidly produce applications and services. As such, those versed in DevOps practices and tools are in high demand, and with the Complete DevOps & Deployment Technologies Bundle, you can join their ranks for over 80 percent off.
Here's the thing about machine learning: use the right datasets and it'll help you root out malware with great accuracy and efficiency. But models are what they eat. Feed them a diet of questionable, biased data and they'll produce garbage. That's the message Sophos data scientist Hillary Sanders delivered at Black Hat USA 2017 on Wednesday in a talk called "Garbage in, Garbage Out: How Purportedly Great Machine Learning Models Can Be Screwed Up By Bad Data". A lot of security experts tout machine learning as the next step in anti-malware technology.
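The "garbage in, garbage out" effect is easy to demonstrate. Below is a toy sketch (not from Sanders' talk; the data and the single "packed executable" feature are invented for illustration): a trivial threshold classifier trained on a biased sample, where every packed file happens to be malware, learns a spurious rule that falls apart on a more realistic test set in which many benign installers are also packed.

```python
# Toy illustration of "garbage in, garbage out" with hypothetical data.
# Feature: 1.0 if a file is packed, 0.0 otherwise. Label: 1 = malware.

def train_threshold(samples):
    """Learn the single feature threshold that best separates the classes."""
    best_t, best_acc = 0.0, 0.0
    for t, _ in samples:  # candidate thresholds drawn from observed values
        acc = sum((f >= t) == bool(y) for f, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Biased training set: every packed sample is malware, so "packed"
# looks like a perfect malware indicator.
biased_train = [(1.0, 1)] * 50 + [(0.0, 0)] * 50

# Realistic test set: plenty of benign software is packed too.
real_test = [(1.0, 1)] * 30 + [(1.0, 0)] * 40 + [(0.0, 0)] * 30

t = train_threshold(biased_train)  # learns t = 1.0: "packed => malware"
test_acc = sum((f >= t) == bool(y) for f, y in real_test) / len(real_test)
# The rule scores 100% on the biased training data but only 60% on
# realistic traffic, flagging all 40 packed benign files as malware.
```

The model isn't broken; the training data simply never showed it a packed benign file, so the cleanest rule it could learn encodes the bias.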