Sweeping changes to England's planning system will "cut red tape, but not standards," Housing Secretary Robert Jenrick has said. Under draft new laws, first revealed on Sunday, developers will be granted "automatic" permission to build homes and schools on sites for "growth". It follows Boris Johnson's pledge to "build back better" after coronavirus. But critics warn it could lead to "bad-quality housing" and loss of local control over development. Mr Johnson promised to speed up investment into homes and infrastructure in June to help the UK recover from the economic impact of coronavirus.
Data prepper Tamr Inc. will assist the U.S. Air Force in boosting utilization of its air assets under a five-year contract designed to use machine learning techniques to accelerate the flight certification process for new aircraft configurations. Those configurations include equipping front-line aircraft with new weapons, sensors and defenses such as electronic warfare pods. Tamr said the contract with the Air Force's Seek Eagle Office could be worth as much as $60 million. The office, based at Eglin Air Force Base, Fla., is responsible for integrating new technologies into front-line aircraft. The Air Force office will use Tamr's machine learning platform to organize more than 30 years of aircraft performance studies dispersed across the organization.
PHILADELPHIA - To answer medical questions that can be applied to a wide patient population, machine learning models rely on large, diverse datasets from a variety of institutions. However, health systems and hospitals are often resistant to sharing patient data, due to legal, privacy, and cultural challenges. An emerging technique called federated learning is a solution to this dilemma, according to a study published Tuesday in the journal Scientific Reports, led by senior author Spyridon Bakas, PhD, an instructor of Radiology and Pathology & Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania. Federated learning -- an approach first deployed at scale by Google for mobile keyboard text prediction -- trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging them. While the approach could potentially be used to answer many different medical questions, Penn Medicine researchers have shown that federated learning succeeds specifically in the context of brain imaging: it can analyze magnetic resonance imaging (MRI) scans of brain tumor patients and distinguish healthy brain tissue from cancerous regions.
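The core idea can be illustrated with a minimal sketch of federated averaging on a toy linear model. The three simulated "institutions," the synthetic data, and the learning-rate settings below are illustrative assumptions; the Penn study used deep segmentation networks on MRI data, not this toy regression. The key property is the same: only model parameters leave each site, never the raw data.

```python
# Minimal federated-averaging (FedAvg) sketch on a toy linear model y = w*x + b.
# Each "site" keeps its data private; only (w, b) are shared and averaged.
import random

random.seed(0)

def make_site(n=50):
    """Simulate one institution's private dataset: y = 2x + 1 plus noise."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = 2.0 * x + 1.0 + random.gauss(0, 0.05)
        data.append((x, y))
    return data

sites = [make_site() for _ in range(3)]  # three institutions

def local_update(w, b, data, lr=0.1, epochs=20):
    """Gradient-descent steps run locally, on one site's data only."""
    n = len(data)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Federated rounds: each site trains locally, then the server averages
# only the resulting parameters -- the raw data is never exchanged.
w, b = 0.0, 0.0
for _ in range(10):
    updates = [local_update(w, b, d) for d in sites]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
```

After a few rounds the shared model recovers the underlying relationship (w near 2, b near 1), even though no site ever saw another site's samples.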
The creation of the Global Partnership on Artificial Intelligence (GPAI) reflects the growing interest of states in AI technologies. The initiative, which brings together 14 countries and the European Union, will help participants establish practical cooperation and formulate common approaches to the development and implementation of AI. At the same time, it is a symptom of the growing technological rivalry in the world, primarily between the United States and China. Russia's ability to interact with the GPAI may be limited for political reasons, but, from a practical point of view, cooperation would help the country implement its national AI strategy. The Global Partnership on Artificial Intelligence (GPAI) was officially launched on June 15, 2020, at the initiative of the G7 countries alongside Australia, India, Mexico, New Zealand, South Korea, Singapore, Slovenia and the European Union. According to the Joint Statement from the Founding Members, the GPAI is an "international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth."
A former Google engineer has been sentenced to 18 months in prison after pleading guilty to stealing trade secrets before joining Uber's effort to build robotic vehicles for its ride-hailing service. The sentence handed down Tuesday by U.S. District Judge William Alsup came more than four months after former Google engineer Anthony Levandowski reached a plea agreement with the federal prosecutors who brought a criminal case against him last August. Levandowski, who helped steer Google's self-driving car project before landing at Uber, was also ordered to pay more than $850,000. Alsup had taken the unusual step of recommending the Justice Department open a criminal investigation into Levandowski while presiding over a high-profile civil trial between Uber and Waymo, a spinoff from a self-driving car project that Google began in 2007 after hiring Levandowski to be part of its team. Levandowski eventually became disillusioned with Google and left the company in early 2016 to start his own self-driving truck company, called Otto, which Uber eventually bought for $680 million. He wound up pleading guilty to one count, culminating in Tuesday's sentencing.
There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: They were made by machine learning algorithms, not a camera. Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: "People with not a lot of experience can take these machine learning models and do pretty powerful things with them," he says. Seen at full resolution, FireEye's fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor's face like his brow furrows and green-gray eyes, which gaze coolly at the viewer.
The U.S. Department of Education's Institute of Education Sciences has awarded the National Center for Research on Education Access and Choice (REACH) at Tulane University a $100,000 contract to collect data from approximately 150,000 school websites across the country to see how the nation's education system is responding to the coronavirus pandemic. The project, which will track traditional public schools, charter schools and private schools, aims to quickly answer questions that are critical for understanding how students are learning when school buildings are closed. Key questions include: how many schools are providing any kind of instructional support, which are delivering online instruction, what resources they are offering to students, and how students stay in contact with teachers. "This data will also help answer important questions about equity in the school system, showing how responses differ according to characteristics like spending levels, student demographics, internet access, and if there are differences based on whether it is a private, charter or traditional public school," said REACH National Director Douglas N. Harris, Schlieder Foundation Chair in Public Education and chair of economics at Tulane University School of Liberal Arts. REACH will work in cooperation with Nicholas Mattei, assistant professor of computer science at Tulane University School of Science and Engineering, to create a computer program that will collect data from every school and district website in the country.
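A crawler of this kind typically reduces each page to visible text and scans it for indicator phrases. The sketch below shows that pattern on a made-up page; the keyword lists and category names are illustrative assumptions, not REACH's actual methodology or taxonomy.

```python
# Hypothetical sketch: classify a school webpage by keyword indicators.
# The INDICATORS table is an illustrative assumption, not the real study design.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of a page, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

# Indicator phrases mapped to the survey's key questions (illustrative).
INDICATORS = {
    "online_instruction": ["remote learning", "online class", "google classroom"],
    "meal_support": ["meal pickup", "grab-and-go", "free lunch"],
}

def classify_page(html):
    """Return {category: bool} flags for one page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = re.sub(r"\s+", " ", " ".join(parser.parts)).lower()
    return {key: any(phrase in text for phrase in phrases)
            for key, phrases in INDICATORS.items()}

sample = """<html><body>
  <h1>District Update</h1>
  <p>All students will continue remote learning via Google Classroom.</p>
  <script>var tracking = 1;</script>
</body></html>"""

result = classify_page(sample)
```

At scale, a program like this would fetch each of the ~150,000 school sites, apply the same classification, and aggregate the flags by school type and demographics.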
From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications that AI will have in the next 15 years, starting with those we should worry the most about. By using fake audio and video to impersonate another person, the technology can cause various types of harm, said the researchers. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relatives over a video call. The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.
To help build a draft resolution on how AI can be developed and deployed, UNESCO is seeking global policymakers and AI experts. The United Nations Educational, Scientific and Cultural Organisation (UNESCO) has said that there is an urgent need for a global instrument on the ethics of AI, to ensure that those it is used by, and used on, are treated fairly and equally. Now it has announced the launch of a global online consultation led by a group of 24 AI experts charged with writing a first draft of a 'Recommendation on the Ethics of AI' document. It is hoped that UNESCO member states will adopt its recommendations by November 2021, making it the first global normative instrument to address the developments and applications of AI. If the recommendation is adopted, these nations will be invited to submit periodic reports every four years on the measures that they have adopted.
The new funding more than doubles the startup's total raised, and a spokesperson says it will be used to accelerate Sight's operations globally -- with a focus on the U.S. -- as Sight advances R&D for the detection of conditions like sepsis and cancer, as well as factors affecting COVID-19. Blood tests are generally unpleasant -- not to mention costly. On average, getting blood work done at a lab costs uninsured patients between $100 and $1,500. In the developing world, where the requisite equipment isn't always readily available, ancillary costs threaten to drive the price substantially higher. That's why Yossi Pollak, previously at Intel subsidiary Mobileye, and Daniel Levner, a former scientist at Harvard's Wyss Institute for Biologically Inspired Engineering, founded Sight Diagnostics in 2011.