Launched in November 2015, the Alan Turing Institute is the national institute for data science and artificial intelligence. Our mission is to make great leaps in research to change the world for the better. The Institute is headquartered at The British Library and brings together researchers from a range of disciplines – mathematics, statistics, computer science, engineering and social sciences – from thirteen leading universities and industry partners. The permanent research staff of the Institute's Research Engineering Group work to realise cutting-edge research as professionally usable software tools and to apply these to address real-world data science and modelling challenges. The group's staff are research software engineers and data scientists.
The Transhuman House, ZS and more… what is in store for 2019? The past year has had a lot of ups and downs. From the success of the AI research at the AGI Laboratory, to the opening and building out of the Transhuman House 2.0, to the Foundation Retreat, a lot has happened this past year. It will be an interesting ride to see what happens this coming year. I hope one theme I've embraced in life this past year will follow me through the next, and that is the archetype of "the Architect" and how it will apply to a cohesive plan for the year.
For most businesses, data is the foundation upon which they wish to build better customer experiences, deliver innovative new products, or improve operational efficiencies. However, for many, showing the return on their data investments remains elusive. A recent McKinsey report indicates that only some 30% of proposed benefits were achieved over the past 5-6 years. In February, Hortonworks commissioned Forrester Research to explore the challenges associated with Big Data adoption and understand the current trends shaping architectural choices. According to the study, 3 out of 4 decision makers surveyed are expanding their use of Big Data in areas of automated decisioning (e.g. fraud detection), real-time analytics, and innovative new products.
Historically, research ethics committees (RECs) have been guided by ethical principles regarding human experimentation intended to protect participants from physical harms and to provide assurance as to their interests and welfare. But research that analyzes large aggregate data sets, possibly including detailed clinical and genomic information of individuals, may require different assessment. At the same time, growth in international data-sharing collaborations adds stress to a system already under fire for subjecting multisite research to duplicative ethics reviews, which can inhibit research without improving the quality of human subjects' protections (1, 2). "Top-down" national regulatory approaches exist for ethics review across multiple sites in domestic research projects [e.g., United States (3, 4), Canada (5), United Kingdom (6), Australia (7)], but their applicability to data-intensive international research has not been considered. Stakeholders around the world have thus been developing "bottom-up" solutions.
It is estimated that AI-enabled tools alone will generate $2.9 trillion in business value by 2021. The stats speak for themselves: AI clearly follows the motto "go big or go home". This explosive growth of AI across different sectors of technology is also beginning to show its colors in software development. Shawn Drost, co-founder and lead instructor of the coding boot camp 'Hack Reactor', says that AI still has a long way to go and is currently impacting the workflow of only a small portion of software engineers on a minority of projects.