The future impacts of artificial intelligence (AI) on society and the labour force have been studied and reported extensively. In his recent book AI Superpowers, Kai-Fu Lee, former president of Google China, wrote that 40 to 50 per cent of current jobs will be technically and economically viable to automate with AI over the next 15 years. Artificial intelligence refers to computer systems that collect, interpret and learn from external data to achieve specific goals and tasks. Unlike the natural intelligence displayed by humans and animals, it is an artificial form of intelligence demonstrated by machines. This has raised questions about the ethics of AI decision-making and the impacts of AI in the workplace.
Like medicine, psychology, or education, data science is fundamentally an applied discipline, with most students who receive advanced degrees in the field going on to work on practical problems. Unlike those disciplines, however, data science education remains heavily focused on theory and methods, and practical coursework typically revolves around cleaned or simplified data sets that have little analog in professional applications. We believe that the environment in which new data scientists are trained should more accurately reflect the one in which they will eventually practice, and we propose here a data science master's degree program that takes inspiration from the residency model used in medicine. Students in the proposed program would spend three years working on a practical problem with an industry, government, or nonprofit partner, supplemented with coursework in data science methods and theory. We also discuss how this program can be implemented in shorter formats to augment existing professional master's programs in other disciplines. This learning-by-doing approach is designed to fill gaps in our current approach to data science education and to ensure that students develop the skills they need to practice data science in a professional context and under the many constraints imposed by that context.
A vice-chancellor's call for universities to train undergraduates "to tell the machines what to do" has rekindled debate about how higher education institutions can best prepare their students for the jobs of the future. Michael Spence outlined plans for the University of Sydney to move towards offering four-year degrees with a greater focus on problem-solving and cultural competency. His comments come as sector leaders around the world debate whether the rise of artificial intelligence and automation will require providers to prioritise specialist skills in areas such as coding, or the broad knowledge that will allow graduates to adapt to a changing workplace. The shift towards longer degrees also runs counter to the push in the UK for more two-year degrees, designed to allow students to start their careers more quickly and more cheaply. In an interview with Times Higher Education, Dr Spence outlined how Sydney had streamlined its 122 degree programmes – a portfolio based on the supposition that "if you enter a narrow tube that has a job name at one end, at the other end you'll plop out into the job" – to just 25. The rise of AI means that such jobs "may not exist by the time you end up there, or at least won't necessarily have any longevity", Dr Spence said.
Machines are eating humans' jobs. And it's not just jobs that are repetitive and low-skill. Automation, robotics, algorithms and artificial intelligence (AI) have in recent times shown they can do equal or sometimes even better work than humans who are dermatologists, insurance claims adjusters, lawyers, seismic testers in oil fields, sports journalists and financial reporters, crew members on guided-missile destroyers, hiring managers, psychological testers, retail salespeople, and border patrol agents. Moreover, there is growing anxiety that technology developments on the near horizon will crush the jobs of the millions who drive cars and trucks, analyze medical tests and data, perform middle management chores, dispense medicine, trade stocks and evaluate markets, fight on battlefields, perform government functions, and even replace those who program software – that is, the creators of algorithms. People will create the jobs of the future, not simply train for them, ...