Microsoft launches a drag-and-drop machine learning tool – TechCrunch


Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates model creation, to a no-code visual interface for building, training and deploying models, to hosted Jupyter-style notebooks for advanced users. Getting started with machine learning is hard: even running the most basic experiment takes a good amount of expertise. All of these new tools greatly simplify the process, either by hiding the code entirely or by giving those who want to write their own code a pre-configured platform for doing so.

Amazon - Press Room - Press Release


The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of pre-built, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon's reference specification so other deep learning engines can be integrated with the interface.
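The "simple, concise code" claim is easiest to see in miniature. The sketch below is not Gluon itself and carries no MXNet dependency; it is a hypothetical numpy illustration of the pattern the release describes, composing a network from pre-built, pre-initialized layer components through a small Python API.

```python
import numpy as np

class Dense:
    """A pre-built fully connected layer with its own parameters."""
    def __init__(self, in_units, units, activation=None):
        rng = np.random.default_rng(0)
        self.w = rng.normal(scale=0.1, size=(in_units, units))
        self.b = np.zeros(units)
        self.activation = activation

    def __call__(self, x):
        y = x @ self.w + self.b
        return np.maximum(y, 0.0) if self.activation == "relu" else y

class Sequential:
    """Chains layers so a model reads as a short list of components."""
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# A two-layer network declared in one line of concise Python.
net = Sequential(Dense(4, 8, activation="relu"), Dense(8, 2))
out = net(np.ones((3, 4)))  # batch of 3 samples, 4 features each
print(out.shape)  # (3, 2)
```

In the real Gluon interface the equivalent components come pre-optimized for the underlying engine, which is what lets developers keep the code this short without sacrificing performance.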

Global Big Data Conference


Though it's still several years away from widespread deployment, 5G is a key component in the evolution of cloud-computing ecosystems toward more distributed environments. Between now and 2025, the networking industry will invest about $1 trillion worldwide in 5G, supporting rapid global adoption of mobile, edge, and embedded devices in practically every sphere of our lives. 5G will be a proving ground for next-generation artificial intelligence (AI), offering an environment in which data-driven algorithms guide every cloud-centric process, device, and experience. Just as significant, AI will be a key component in ensuring that 5G networks are optimized from end to end, 24/7. AI will live at every edge in the hybrid clouds, multiclouds, and mesh networks of the future. Already, we see prominent AI platform vendors, such as NVIDIA, making significant investments in 5G-based services for mobility, the Internet of Things (IoT), and other edge environments.

Elon Musk reiterates the need for brain-computer interfaces in the age of AI


How do you avoid being made obsolete by artificial intelligence at a time when resources and research are largely being funnelled toward improving that area of tech? Musk has spoken about the potential of brain interfaces, including a "neural lace," before, but at the launch of Tesla in the UAE during the World Government Summit in Dubai on Monday, he articulated more clearly why we might seek to deepen our ties to our computing devices in the near future. His comments recalled those made at Recode's Code Conference last year, in which he discussed a "neural lace" that would interface directly with the brain, letting users communicate thoughts to computers with much more bandwidth and much less latency than is currently possible via input mechanisms like keyboard and mouse. Such an interface, he said on Monday in Dubai, could "achieve a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem," reports CNBC. AI's potential for disruption lies not only in its ability to perform specific tasks more efficiently than its human creators, but also in how fast it can communicate with other networked devices: computers have almost a trillion-fold speed edge when relaying information to other computer systems, versus the pace at which people convey and retrieve information through typed text or even voice queries.

Are you ashamed? Can a gaze tracker tell?


The user interfaces of the future cannot be imagined without an emotion-sensitive modality. Humans communicate in more ways than plain speech, yet spatial, temporal, visual and vocal cues are often forgotten in computer interfaces. Each cue relates to one or more forms of nonverbal communication, which can be divided into chronemics, haptics, kinesics, oculesics, olfactics, paralanguage and proxemics (Tubbs, 2009), each relating to particular activities of the human body, voice or gaze. Unfortunately, modern computer-based user interfaces do not take full advantage of nonverbal communicative abilities, often resulting in far less natural interaction.
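One of the listed cue families, oculesics, is the kind a gaze tracker can pick up directly. Below is a minimal sketch with hypothetical data and thresholds (not a real gaze-tracker API): it flags gaze aversion when too large a fraction of (x, y) gaze samples falls outside the screen region.

```python
def aversion_ratio(samples, width=1920, height=1080):
    """Fraction of (x, y) gaze samples that fall off-screen."""
    off = sum(1 for x, y in samples
              if not (0 <= x < width and 0 <= y < height))
    return off / len(samples)

def is_averted(samples, threshold=0.5):
    """Oculesics cue: True when the user is mostly looking away."""
    return aversion_ratio(samples) > threshold

# Hypothetical samples: 8 fixations on screen center, 12 off-screen.
on_screen = [(960, 540)] * 8
looking_away = [(-50, 540)] * 12
print(is_averted(on_screen + looking_away))  # True: 12/20 samples off-screen
```

A real emotion-sensitive interface would of course combine such a signal with the other cue families rather than reading shame, or anything else, from gaze alone.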