Much of the discussion of the fourth industrial revolution relates to the disruptive impact of artificial intelligence, robotics, biotech, and big data on the world of work and business. It could lead to huge gains in productivity, wealth creation and human happiness. Equally, it may kill millions of jobs, fuel social tensions, and widen inequality. Civil society's place in this massive societal shake-out, reckons Andy Haldane, is relatively unexplored – but it will be profound. Haldane, the Bank of England's chief economist, is regarded as a "maverick" thinker among central bankers on account of his views not only on banking and financial regulation but on society more widely: from poverty ("scarcity of money reshapes your brain and reshapes your decision-making") to the importance of trade unions.
Fixed term contract until 1 March 2021 The Royal College of Art is the UK's only entirely postgraduate art and design university. In 2018/19 the College will have some 2,300 students registered for MA, MRes, MPhil and PhD degrees and over 450 permanent academic, technical and administrative staff, with more than 1,000 visiting lecturers and professors. The RCA Robotics Laboratory, recently established and directed by RCA's Academic Leader in Robotics, Dr Sina Sareh, develops new bioinspired technologies for robot mobility, manipulation and attachment in unstructured and extreme environments through projects funded by EPSRC, Innovate UK and industrial partners. In line with the Royal College of Art's Strategic Plan 2016-2021, the lab is intended to create significant research and education capacity in robotics by 2020, to support the RCA's ambitious expansion plans in Battersea South including a new robotics facility and new research centres - the most radical transformation of the institution's campus in its 181-year history. Through Innovate UK's "Robotics and AI: Inspect, Maintain and Repair in Extreme Environments" funding scheme, a research project grant entitled Multi-Platform Inspection, Maintenance & Repair in Extreme Environments (MIMRee) has been awarded to the RCA.
A new "demographic inference" tool developed by academics can make predictions based solely on the information in a person's social media profile. The tool, which works in 32 languages, could pave the way for views expressed on social media to be factored into popular survey methods. Researchers at the University of Oxford, University of Michigan, University of Massachusetts, GESIS – Leibniz Institute for the Social Sciences, the Max Planck Institute, and Stanford University have developed a method to infer information about a social media account owner based on the details disclosed in their Twitter profile. A new machine learning system, unveiled at the Web Conference in San Francisco this week, learned the patterns associated with different ages and genders, and distinguished organisations from individuals, from a data set of over four million Twitter accounts in 32 languages. This information was then combined with estimated locations and re-weighted against census data to produce more accurate estimates of the population in 1,101 statistical regions across the EU.
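The census reweighting step described above can be sketched as a simple post-stratification adjustment: predicted group counts are rescaled so that each group's share matches its share in the census. This is an illustrative reconstruction, not the researchers' actual pipeline; the group names and figures below are made up.

```python
# Hypothetical sketch of post-stratification reweighting. Social-media
# samples over-represent some demographic groups, so each group gets a
# weight that restores its census share.

def poststratify(predicted_counts, census_shares):
    """Compute per-group weights so that reweighted group shares
    match census population shares.

    predicted_counts: {group: count inferred from social media}
    census_shares:    {group: fraction of the real population}
    """
    total = sum(predicted_counts.values())
    weights = {}
    for group, count in predicted_counts.items():
        sample_share = count / total
        weights[group] = census_shares[group] / sample_share
    return weights

# Illustrative example: young users are over-represented on the platform.
predicted = {"18-29": 600, "30-49": 300, "50+": 100}
census = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}
w = poststratify(predicted, census)
```

After reweighting, each group's weighted count is proportional to its census share, so platform skew no longer biases the regional estimates.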
A man who almost died from meningitis has revealed how he began to look forward to having his limbs amputated. Mike Davies, 60, from Brighton, spent 70 days in intensive care with meningococcal meningitis and septicaemia. During this time, he said he knew his hands and feet were "dead" and he would recover better without them. Now he says he is in a positive place and "can even hold a pint of beer". With the help of prosthetic limbs, Mr Davies can drive a specially-adapted car and said he was living life to the full.
Diverse applications - particularly in tumour subtyping - have demonstrated the importance of integrative clustering as a means to combine information from multiple high-dimensional omics datasets. Cluster-Of-Clusters Analysis (COCA) is a popular integrative clustering method that has been widely applied in the context of tumour subtyping. However, the properties of COCA have never been systematically explored, and the robustness of this approach to the inclusion of noisy datasets, or datasets that define conflicting clustering structures, is unclear. We rigorously benchmark COCA, and present Kernel Learning Integrative Clustering (KLIC) as an alternative strategy. KLIC frames the challenge of combining clustering structures as a multiple kernel learning problem, in which different datasets each provide a weighted contribution to the final clustering. This allows the contribution of noisy datasets to be down-weighted relative to more informative datasets. We show through extensive simulation studies that KLIC is more robust than COCA in a variety of situations. R code to run KLIC and COCA can be found at https://github.com/acabassi/klic
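The core idea behind KLIC, combining per-dataset kernels with weights so that noisy datasets contribute less to the final clustering, can be illustrated in a few lines. The authors' implementation is the R package linked above; the sketch below is a hypothetical Python illustration that uses fixed weights rather than learned ones.

```python
import numpy as np

# Illustrative sketch (not the authors' R implementation): two "omics"
# views of the same 20 samples, one informative and one pure noise.
rng = np.random.default_rng(0)
X1 = np.vstack([rng.normal(0, 0.3, (10, 2)),   # informative view:
                rng.normal(5, 0.3, (10, 2))])  # two clear clusters
X2 = rng.normal(0, 1, (20, 2))                 # noisy view: no structure

def rbf_kernel(X, gamma=0.5):
    """Per-dataset similarity matrix (RBF kernel)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Weighted kernel combination: in KLIC the weights are learned via
# multiple kernel learning; here we down-weight the noisy view by hand
# to show the effect of a weighted combination.
weights = (0.9, 0.1)
K = weights[0] * rbf_kernel(X1) + weights[1] * rbf_kernel(X2)

# One spectral step on the combined kernel: the sign of the leading
# eigenvector of the centred kernel separates the two groups.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
_, vecs = np.linalg.eigh(H @ K @ H)
clusters = vecs[:, -1] > 0
```

Because the informative view dominates the combination, the recovered clustering matches its two-cluster structure even though the second view is pure noise; down-weighting noisy datasets is exactly the robustness property the abstract attributes to KLIC.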
The number of recorded sexual offences involving online dating sites and apps has almost doubled in the last four years, police figures suggest. Offences where a dating site was mentioned in a police report increased from 156 in 2015, to 286 last year, according to figures from 23 of the 43 forces in England and Wales. The Online Dating Association said apps try to protect users from harm. But the National Police Chiefs' Council said firms had a duty to do more. The figures reveal that between 2015 and 2018 there were a total of 2,029 recorded offences - including sexual offences - where an online dating website or app was mentioned in a police report.
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou-Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. As a consequence we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
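The elementary computation described above, the inverse of a Pigou-Dalton transfer on a probability distribution, can be made concrete in a few lines. This is an illustrative sketch of the general idea, not the paper's formal construction: moving probability mass from a less likely to a more likely outcome sharpens the distribution, and a Schur-concave uncertainty measure such as Shannon entropy therefore decreases.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits; a Schur-concave uncertainty measure."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def inverse_pigou_dalton(p, i, j, eps):
    """Sketch of an 'elementary computation': transfer mass eps from the
    less likely outcome j to the more likely outcome i. A Pigou-Dalton
    transfer would equalise p[i] and p[j]; its inverse sharpens them,
    so the result majorizes p and uncertainty is reduced."""
    assert p[i] >= p[j] >= eps
    q = list(p)
    q[i] += eps
    q[j] -= eps
    return q

p = [0.5, 0.3, 0.2]
q = inverse_pigou_dalton(p, 0, 2, 0.1)
# uncertainty is reduced: H(q) < H(p)
```

Any order-preserving resource cost function of the kind the abstract describes would assign a positive cost to this step, since it strictly reduces uncertainty.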
In computer science and engineering, design patterns are used to architect software design solutions: implementing industry best practices for object-oriented programming requires design patterns at a higher level, as abstract solutions to recurring problems. All functions in Python are first-class citizens. Biological cognitive models, based on the way complex computations are carried out in the brain, inspire the architecture of neural networks for machine learning and deep learning. Connectionism in the human brain is modelled through artificial neural networks and massively parallel distributed processing, with a wide range of capabilities for models of memory, attention span, semantic representation, language, concept formation, and cognitive reasoning.
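The point about first-class functions connects directly to design patterns: several classic object-oriented patterns simplify dramatically in Python because functions can be passed around as values. A minimal, hypothetical sketch of the Strategy pattern (the names here are illustrative):

```python
# Because Python functions are first-class objects, the Strategy pattern
# needs no class hierarchy: a strategy is just a callable passed as data.

def apply_pricing(price, strategy):
    """Apply any pricing strategy; `strategy` maps price -> price."""
    return strategy(price)

def full_price(price):
    return price

def half_price(price):
    return price * 0.5

# Strategies are swapped at call time, not baked into a class.
print(apply_pricing(100, full_price))   # 100
print(apply_pricing(100, half_price))   # 50.0
```

In a language without first-class functions, each strategy would typically be a class implementing a common interface; here the interface is simply "any callable taking and returning a price".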
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; that such machines are better moral reasoners than humans; and that building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
The UK's National Health Service continues to suffer the longest funding squeeze since it was established 71 years ago. That financial pressure has resulted in the service missing targets for how soon cancer patients should be referred for treatment for the past three years and waiting times in Accident and Emergency departments being at record levels. Such is the financial and staffing pressure on the service, that talking about how recent advances in artificial intelligence (AI) could be applied to the NHS might seem fanciful. Yet Professor Tony Young, national clinical director for innovation at NHS England, believes healthcare is at an inflection point, where machine-learning technology could fuel huge advances in what's possible. "I think that healthcare is heading for one of those giant-leap moments in the next five to 10 years and AI is going to be a key tool in enabling us to take that giant leap," he said, speaking at an event in London organised by The King's Fund and IBM Watson Health.