The development of the internet over the last few decades has resulted in a massive increase in the production of data and the unprecedented availability of computing power for corporate applications. Fuelled by these revolutions, machine learning and artificial intelligence (AI) techniques have emerged from being purely academic topics of investigation to become the basis for a new wave of products and services for the digital age. The paradigm-shifting opportunities this emerging technology presents to corporates range from the ability to expose and extract insights and patterns from data lakes to replacing human beings in critical decision-making scenarios. However, with these opportunities also come novel risks and concerns that must be considered when contemplating the development and deployment of AI and machine learning agents. These include understanding how their trustworthiness may be measured, the ethics and policies required for their deployment, and the cybersecurity implications of their widespread adoption.
For a story, one might write: "What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or the first sentence of the target output may be necessary.
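The priming technique described above can be sketched in code. This is a minimal illustration, not any specific API: the function name, prompt wording, and the answer stub "It means that" are all illustrative assumptions. The key idea is that the prompt ends with the opening words of the desired output, so the model's completion is forced to continue in the intended mode rather than pivot elsewhere.

```python
def primed_summary_prompt(passage: str, stub: str = "It means that") -> str:
    """Build a summarization prompt that ends with the first few words
    of the target output (the `stub`), constraining the completion's form.
    Wording and stub are illustrative assumptions, not from any real API.
    """
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"""\n{passage}\n"""\n\n'
        "I rephrased it for him, in plain language a second grader can "
        'understand:\n\n"""\n'
        f"{stub}"
    )

prompt = primed_summary_prompt("The mitochondria is the powerhouse of the cell.")
# The prompt ends mid-sentence with the stub, so a completion model
# has little choice but to continue the rephrasing.
print(prompt.endswith("It means that"))
```

Sending such a prompt to a completion model leaves it mid-sentence at the stub, which is exactly the extra constraint the passage recommends when a bare instruction keeps failing.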
The issue of corporate ethics is never far from the business media headlines. Take the troubles embroiling former Nissan chair Carlos Ghosn, or the accounting problems at Patisserie Valerie in the UK, to name just two recent examples. Despite the best intentions and efforts of policymakers, legislators, boards and professional consultants, the corporate scandals keep coming. Now, to further complicate matters, the latest developments in the digital revolution are adding a new dimension to the challenge of ensuring companies and their executives behave responsibly. Ioannis Ioannou, Associate Professor of Strategy and Entrepreneurship at London Business School, and Sam Baker, Monitor Deloitte Partner, suggest that, while the widespread introduction of AI and machine learning technologies can be a force for good, without the right approach there is a risk that the corporate ethics waters become even murkier.
Human biases can become part of the technology people create, according to Nicos Savva, Associate Professor of Management Science and Operations at London Business School. A recent House of Lords Select Committee on Artificial Intelligence (AI) "AI in the UK: Ready, Willing and Able?" urged people using and developing AI to put ethics centre stage. The committee suggested a cross-sector AI Code, with five principles that could be applied globally including that artificial intelligence should "be developed for the common good and benefit of humanity" and should "operate on principles of intelligibility and fairness". The committee's chairman, Lord Clement-Jones, said in a statement: "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences." He added that "AI is not without its risks".
This report sets out a series of strategic recommendations to the government, based on core pillars including data supply and exchange, skills and education, and the development of an artificial intelligence infrastructure in the UK, with a view to growing the country's AI sector. That sector was also given a boost by the recent Budget and by the government's Industrial Strategy White Paper, published this week.
Technology and Legal Practice… How Disruptive Can It Possibly Be? New technology, capable of massively disrupting the legal profession, continues to be introduced at an ever-increasing rate. Legaltech, including chatbots, document automation and ground-breaking research tools, amongst others, raises fundamental existential questions about the legal profession. This evening event at Westminster Law School, University of Westminster, brings together three prominent experts in the fields of artificial intelligence, robotics and law for a conversation around current developments in these areas, followed by an opportunity for the audience to engage and ask questions. Chrissie Lightfoot is a prominent international legal figure, an entrepreneur, a legal futurist, legaltech investor, writer, international keynote speaker, legal and business commentator (quoted periodically in The Times and FT), solicitor (non-practising), Honorary Visiting Fellow at the University of Westminster School of Law, and author of the best-sellers The Naked Lawyer and Tomorrow's Naked Lawyer. She is CEO and founder of EntrepreneurLawyer Ltd and, as the visionary and creator of Robot Lawyer LISA (the world's first impartial AI lawyer), is CEO and co-founder of AI Tech Support Ltd (trading as Robot Lawyer LISA).