Apple has lifted the lid on its elite engineering teams working in the UK – and introduced them to classes full of children. The company has thrown its support behind the government's Year of Engineering, which aims to encourage more young people to pursue careers in the sector. Engineering still suffers from a serious diversity problem, and the industry remains far short of the number of engineers it intends to hire. As part of the project, Apple is bringing students to meet several of its key teams, including those that design the silicon powering today's iPhones and the Siri voice assistant that lives on them. The engineers will talk to children about what it takes to work in the field, and the students will work through interactive sessions of their own.
Artificial intelligence is now capable of performing tasks that have historically required human ingenuity. But these advances also put AI on a collision course with numerous aspects of p... The UK government has launched a report calling for the creation of an artificial intelligence code of ethics.
The UK is in a strong position to be a world leader in the development of artificial intelligence (AI) – but an ethical approach will be central to future success, says a new House of Lords report. Machine learning, the precursor to AI, is already widespread across retail, in tools ranging from chatbots to recommendation engines. Retailers from Shop Direct to the Yoox Net-A-Porter Group and Ocado are investing heavily in the relevant technologies, which are set to lead to true AI in time. Machine learning automates human decision-making, enabling it to happen at scale. Retailers will move further towards true AI when they develop machines that can make original decisions, reaching altogether new conclusions.
Recent research from Britain confirms what many people in enterprises already know: artificial intelligence (AI) is going to profoundly change the workplace. However, unlike previous research, the findings contained in the AI in the UK: Ready, willing and able? report describe a workplace where AI enhances and creates many new jobs, but also one where retraining is not an occasional event but something ongoing and continuous as AI changes the way we work. The report has a very specific focus, though. It outlines what the authors believe are the opportunities for the United Kingdom in an AI-driven world and what the UK government needs to do to turn workplace change to the advantage of its citizens.
Millie Bobby Brown has been named as one of the world's 100 most influential people by Time magazine. The 14-year-old Stranger Things actress joins Prince Harry, Meghan Markle and rapper Cardi B on the 2018 list. She is the youngest person to be included in Time's top 100, which is published every year. Brown rose to fame with her role as the character Eleven in the hugely popular science fiction TV show. She starred in the first series when she was 12.
Artificial intelligence (AI) should be subject to a cross-sector code of practice that ensures the technology is developed ethically and does not diminish the rights and opportunities of humans, according to a new report by the House of Lords. In the comprehensive report, released this morning, the House of Lords Select Committee said the UK is in a "unique position" to help shape the development of AI on the world stage, ensuring the technology is only applied for the benefit of mankind. "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences," said Committee chairman Lord Clement-Jones. "The UK contains leading AI companies, a dynamic academic research culture, and a vigorous startup ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI's development and use," added Clement-Jones.
Brittany Kaiser, a former Cambridge Analytica employee who left the company in January and is today giving evidence to the UK parliamentary committee investigating online misinformation, has suggested that data on far more Facebook users may have found its way into the consultancy's hands than the up to 87 million people Facebook has so far said had personal data compromised as a result of a personality quiz app running on its platform, developed by an academic working with CA. Another former CA employee, Chris Wylie, previously told the committee the company worked with professor Aleksandr Kogan to gather Facebook users' data – via his thisisyourdigitallife quiz app – because Kogan had agreed to work on gathering and processing the data first, rather than negotiating commercial terms up front. CA's intent was to use Facebook users' data for political microtargeting, according to evidence provided by Wylie. In her evidence, Kaiser said: "I should emphasise that the Kogan/GSR datasets and questionnaires were not the only Facebook-connected questionnaires and datasets which Cambridge Analytica used. I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login – for example, the 'sex compass' quiz."
The UK's first major Parliamentary inquiry into artificial intelligence has called for a new cross-sector ethics code to ensure that the country becomes a world leader in AI. Lord Clement-Jones, the Chairman of the House of Lords Select Committee on Artificial Intelligence, told Techworld that an ethical approach was essential to ensure public support for AI. "What we want is to make sure that the public is fully trusting in this technology, and you can only do that if they believe it's for the benefit of them and others when they're being applied, and also that it's transparent and unbiased in its application," he said. The proposed "AI Code" could attract public support by creating consistent guidelines for developing and using AI across all organisations and companies in both the public and private sectors. In a report titled AI in the UK: Ready, Willing and Able?, the committee set out five principles to form the basis of the code, which could be adopted internationally. This AI code could provide the basis for future statutory regulation, but the committee stopped short of recommending new regulation specifically for AI at this point.
The UK's House of Lords Select Committee on Artificial Intelligence has finally published its major report into the impact, regulation and use of AI, called 'AI in the UK: ready, willing and able?', after taking evidence from multiple experts across many sectors, including the legal industry. For those who have been following the debate closely, it will come as no surprise that the report understandably sticks to the main themes of jobs and ethics. Although 'the law' appears many times as a subject related to liability, transparency and ethics, there is little about the actual business of law and AI, or the New Wave of legal technology that is driven by AI tech such as NLP and machine learning. There are, however, references to the need for transparency in any algorithms that may help make decisions related to a defendant in a court case. The Lords committee has also asked the UK's Law Commission to review whether existing laws on liability cover errors created by an AI system.