Why Partnership Strategy, Not Technology, Drives Digital Transformation

The idea of expected value has been known since the 17th century: Blaise Pascal invoked it in his famous wager, contained in his Pensées, published in 1670. When a decision-maker faces a number of possible actions, each of which could give rise to more than one outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities with which they follow from each course of action, and multiply the two to give an "expected value", the average expectation for an outcome. The action to choose is the one with the highest total expected value. Decision theory (or the theory of choice) is closely related to game theory and is an interdisciplinary topic, studied by economists, statisticians, psychologists, biologists, political and other social scientists, philosophers, and computer scientists. The need for decision-making under uncertainty has never been stronger: although the digital realm is evolving fast, the choice of partnership strategy remains a human prerogative and a key driver of the digital ecosystem's evolution.
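The expected-value procedure described above can be sketched in a few lines of code. This is a minimal illustration, not a decision-theory library: the action names, outcome values, and probabilities below are purely hypothetical, and each action's probabilities are assumed to sum to 1.

```python
def expected_value(outcomes):
    """Sum of value * probability over an action's possible outcomes."""
    return sum(value * prob for value, prob in outcomes)

def best_action(actions):
    """Return the name of the action with the highest expected value."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# Hypothetical choice: each action maps to (outcome_value, probability) pairs.
actions = {
    "invest": [(100, 0.3), (-20, 0.7)],  # EV = 30 - 14 = 16
    "hold":   [(10, 1.0)],               # EV = 10
}

print(best_action(actions))  # -> invest
```

The rational choice under this rule is "invest", since 16 > 10, even though the investment loses money 70% of the time.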
Jan Buytaert is chief information officer at GO!, the public body for state schools in the Flanders region of Belgium. His role is to initiate new IT projects and prove their value to the business, with the hope that business decision-makers and policymakers will give them the green light. The projects can have huge implications for education in Belgium, as the region has around 750 schools and institutions, and 210,000 students. "There wasn't always a lot of digital innovation so I had to work hard trying to convince management and policymakers that we should invest in tech and digital education, and change the way of teaching and learning," Buytaert tells NS Tech. In 2016, Buytaert and his team analysed the way teaching was carried out in several schools, working alongside teachers, students and principals.
The UK government has developed a voracious appetite for artificial intelligence (AI), based on a promise of its apparently transformative power across myriad industries. From prime minister Boris Johnson's pledge to fund a £250m AI lab for the NHS, to the Department for Education's recently launched 'AI horizon scanning group', AI is being lauded as a panacea to some of the most pressing issues society faces. Education is just one of the sectors that is meeting AI with open arms. As Matthew Jones at Perlego argued for this title, the opportunities being presented for AI to close educational accessibility gaps are exciting. In fact, educators, policymakers and investors are all being bombarded with messages about AI's seemingly endless benefits in the classroom.
Today microcontrollers can be found in almost any technical device, from washing machines to blood pressure meters and wearables. Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics that require no connection to a cloud or to high-performance computers. The sensor-level AI system recognizes handwriting and gestures, enabling, for example, gesture-controlled input when the library runs on a wearable. A wide variety of software solutions currently exist for machine learning, but as a rule they are only available for PCs and are based on the programming language Python.
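To make the idea of a "completely configurable artificial neural network" concrete, here is a tiny dependency-free sketch of a feedforward network's inference pass, the kind of computation a library like AIfES runs entirely on-device. Note the hedges: this is not the AIfES API (AIfES itself is a C library), and the layer sizes and weight values below are hypothetical, standing in for weights a real application would learn.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i(in[i] * w[j][i]) + b[j]."""
    return [sum(x * w for x, w in zip(inputs, ws)) + b
            for ws, b in zip(weights, biases)]

def sigmoid(xs):
    """Element-wise logistic activation, squashing values into (0, 1)."""
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]

# A hypothetical 2-input, 4-hidden, 1-output network with made-up weights.
w1 = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9], [0.1, 0.4]]
b1 = [0.0, 0.1, -0.1, 0.0]
w2 = [[0.7, -0.5, 0.3, 0.6]]
b2 = [0.05]

def predict(x):
    hidden = sigmoid(dense(x, w1, b1))
    return sigmoid(dense(hidden, w2, b2))

print(predict([0.2, 0.9]))  # a single score in (0, 1)
```

Nothing here needs a cloud connection or floating-point accelerator, which is the point: the same arithmetic, written in C with fixed layer configurations, fits comfortably on a sensor node or wearable.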
After his tenure as chief scientist at Baidu, Andrew Ng, the founder of the Google Brain project and former CEO of Coursera, set up a number of different projects that all focus on making AI more approachable. These include the education startup Deeplearning.ai. Today, Ng announced he has opened a second office for these projects in Medellin, Colombia. At first, Medellin may seem like an odd choice, but today's Medellin is very different from the one you may have seen on Narcos (and a lot safer).
Machine learning has been around for quite some time, and we see or use it, knowingly or unknowingly, in our daily lives. The best example appears the moment we open our email: the spam filter! It saves you a lot of time by automatically keeping the most important emails in your inbox and moving the suspicious ones to your spam folder. Let's look at how machine learning is defined, how it helps our everyday processes, and the different types of machine learning. Machine learning is a method of data analysis that automates analytical model building.
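The spam filter is also a good way to see what "automating analytical model building" means in practice: nobody hand-writes rules for every spam phrase; a model is fitted to labelled examples. Below is a toy naive Bayes classifier (with add-one smoothing) trained on a handful of hypothetical emails; real filters use far more data and features, so treat this strictly as an illustrative sketch.

```python
from collections import Counter
import math

def train(emails, labels):
    """Count words per class -- this counting IS the 'model building' step."""
    counts = {label: Counter() for label in set(labels)}
    priors = Counter(labels)
    for text, label in zip(emails, labels):
        counts[label].update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    """Naive Bayes: pick the class with the highest log-probability."""
    n = sum(priors.values())
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        score = math.log(priors[label] / n)  # log prior
        total = sum(word_counts.values())
        for w in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the class.
            score += math.log((word_counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

emails = [
    "win a free prize now",           # spam
    "claim your free money today",    # spam
    "meeting agenda for tomorrow",    # ham
    "project status and next steps",  # ham
]
labels = ["spam", "spam", "ham", "ham"]
model = train(emails, labels)
print(predict("free prize money", *model))  # -> spam
```

Feed it a message full of words it has only seen in spam and it flags spam; a message about meetings and projects comes back ham. The "analysis" was automated: only the labelled examples were supplied by hand.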
The two biggest barriers to the use of machine learning (both classical machine learning and deep learning) are skills and computing resources. You can solve the second problem by throwing money at it, either for the purchase of accelerated hardware (such as computers with high-end GPUs) or for the rental of compute resources in the cloud (such as instances with attached GPUs, TPUs, and FPGAs). On the other hand, solving the skills problem is harder. Data scientists often command hefty salaries and may still be hard to recruit. Google was able to train many of its employees on its own TensorFlow framework, but most companies barely have people skilled enough to build machine learning and deep learning models themselves, much less teach others how.
At an event in Haifa, Israel, two months ago, Intel revealed the broad outlines of its new Nervana Neural Network Processor for Inference, or NNP-I for short, which comes as a modified 10nm Ice Lake processor riding on a PCB that slots into an M.2 port (yes, the M.2 port normally used for storage). Today, the company provided further deep-dive details of the design here at Hot Chips 31, the premier venue for leading semiconductor vendors to detail their latest microarchitectures. Intel is working on several different initiatives to increase its presence in the booming AI market with its 'AI everywhere' strategy. The company's broad approach includes GPUs, FPGAs, and custom ASICs that each tackle different challenges in the AI space: some solutions are designed for compute-intensive training tasks that create complex neural networks for object recognition, speech translation, and voice synthesis workloads, to name a few, while separate solutions run the resulting trained models as lightweight code in a process called inference. Intel's Spring Hill Nervana Neural Network Processor for inference (NNP-I) 1000, which we'll refer to as the NNP-I, tackles those lightweight inference workloads in the data center.
A new robot project has been published to the Instructables Circuits website; equipped with machine learning technology, it can see the world through a generic camera and perform tasks depending on the detected object's position and orientation. Check out the video below to learn more about the Raspberry Pi powered robot, which is equipped with a 3D printed claw. "This robot is truly special because it can use Machine Learning models to 'see' the world via a generic camera and perform tasks depending on how the detected object's position is changing in the camera. This robot is built around the ever popular Raspberry Pi, the incredibly powerful RoboClaw motor controller, and the common Rover 5 robot platform. Furthermore, all the additional physical parts are 3D printed."
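The control loop behind "perform tasks depending on how the detected object's position is changing" can be sketched very simply: a vision model (not shown here) reports where the object sits in the camera frame, and the robot steers to keep it centred. The function name, dead-zone threshold, and motor commands below are all hypothetical, not taken from the Instructables build.

```python
def steer_command(object_x, frame_width, dead_zone=0.1):
    """Map the detected object's x position (pixels) to a motor command."""
    # Normalised offset from the frame centre, in [-1, 1].
    offset = (object_x - frame_width / 2) / (frame_width / 2)
    if abs(offset) < dead_zone:
        return "forward"  # object roughly centred: drive straight ahead
    return "right" if offset > 0 else "left"

# Example: on a 640-pixel-wide frame, an object at x=400 is right of centre.
print(steer_command(400, 640))  # -> right
```

In the real robot this decision would run every frame, with the chosen command translated into wheel speeds for the motor controller.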
Google says it has made it possible for a smartphone to interpret and "read aloud" sign language. The tech firm has not made an app of its own but has published algorithms which it hopes developers will use to make their own apps. Until now, this type of software has only worked on PCs. Campaigners from the hearing-impaired community have welcomed the move, but say the tech might struggle to fully grasp some conversations. In an AI blog, Google research engineers Valentin Bazarevsky and Fan Zhang said the intention of the freely published technology was to serve as "the basis for sign language understanding".