Artificial intelligence (AI) is highly effective at parsing extreme volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at the conclusions it presents, at least, not in a way that most people can understand. This "black box" characteristic is starting to throw some serious kinks in the applications that AI is empowering, particularly in medical, financial and other critical fields, where the "why" of any particular action is often more important than the "what." This is leading to a new field of study called explainable AI (XAI), which seeks to infuse AI algorithms with enough transparency so users outside the realm of data scientists and programmers can double-check their AI's logic to make sure it is operating within the bounds of acceptable reasoning, bias and other factors.
Less than a year ago, Anthropic was founded by former OpenAI VP of research Dario Amodei, intending to perform research in the public interest on making AI more reliable and explainable. Its $124 million in funding was surprising then, but nothing could have prepared us for the company raising $580 million less than a year later. "With this fundraise, we're going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at scale," said Amodei in the announcement. His sister Daniela, with whom he co-founded the public benefit corporation, said that having built out the company, "We're focusing on ensuring Anthropic has the culture and governance to continue to responsibly explore and develop safe AI systems as we scale." That is the problem category Anthropic was formed to examine: how to better understand the AI models increasingly in use in every industry as they grow beyond our ability to explain their logic and outcomes.
OK, sure, machine learning is great for making predictions, but you can't use it to replace scientific theory. Not only will it fail to reach generalizable conclusions; the result will also lack elegance and explainability. We won't be able to understand it or build upon it! What makes an algorithm, theory, or agent explainable? It's certainly not the ability to "look inside": we're quite happy to assume that black boxes, such as brains, are capable of explaining their conclusions and theories, and we scoff at the idea that perfectly transparent neural networks are "explainable" in any meaningful sense. So it's not visibility that makes something explainable.
Sen. Susan Collins, R-Maine, on Tuesday pressed Attorney General Merrick Garland on "conflicting positions" the Biden administration has held on a number of issues, including COVID-19 mandates and border security. During a Senate Appropriations Committee hearing, Collins asked Garland how the administration could justify its conclusion that the pandemic had "subsided enough to warrant the termination of Title 42" while at the same time arguing that "the public health consequences are dire enough to warrant compelled mask usage by Americans on public transportation." Sen. Susan Collins, R-Maine, questions Attorney General Merrick Garland during a Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies hearing to discuss the fiscal year 2023 budget of the Department of Justice at the Capitol in Washington, DC, on April 26, 2022. In his response to Collins, Garland insisted that the role of the Justice Department is "not to make judgments about the public health and really not to make judgments about policy," but to instead "make determinations of whether the programs and requests of the agencies that are responsible for those are lawful."
A trial in which trainee teachers who were being taught to identify pupils with potential learning difficulties had their work 'marked' by artificial intelligence has found the approach significantly improved their reasoning. It suggests that artificial intelligence (AI) could enhance teachers' "diagnostic reasoning": the ability to collect and assess evidence about a pupil, and draw appropriate conclusions so they can be given tailored support. During the trial, trainees were asked to assess six fictionalised "simulated" pupils with potential learning difficulties. They were given examples of their schoolwork, as well as other information such as behaviour records and transcriptions of conversations with parents. They then had to decide whether or not each pupil had learning difficulties such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD), and explain their reasoning.
The US Federal Aviation Administration has revoked a YouTuber's pilot license after it concluded that he intentionally crashed his plane for the sake of gaining online views. On 24 November 2021, Trevor Jacob was flying over California's Los Padres national forest in his small turboprop plane when his propeller stopped working. "I'm over the mountains and I … have an engine out," Jacob said into his camera while sitting in the cockpit. He then proceeded to jump out of the plane, filming himself using a selfie stick before landing with his parachute into an open field. Jacob filmed the whole incident and uploaded it to YouTube in a video titled I Crashed My Plane.
Just like a coin, explainability in AI has two faces -- one it shows to the developers (who actually build the models) and the other to the users (the end customers). The former face, intrinsic explainability (IE), is a technical indicator to the builder that explains the inner workings of the model, whereas the latter, extrinsic explainability (EE), is proof to the customers about the model's predictions. While IE is required for any reasonable model improvement, we need EE for factual confirmation: a layperson who ends up acting on the model's prediction needs to know why the model is suggesting something.
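As a concrete illustration of extrinsic explainability, here is a minimal sketch: a linear scoring model whose prediction decomposes into additive per-feature contributions that can be shown to an end user. All feature names and weights here are hypothetical, and real systems would use far richer models and explanation methods.

```python
# Minimal sketch of an end-user ("extrinsic") explanation: decompose a
# linear model's score into per-feature contributions.
# All feature names and weights below are hypothetical.

def explain_linear(weights, features, bias=0.0):
    """Return the score and each feature's additive contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain_linear(weights, applicant, bias=1.0)
# 'why' tells the applicant which factors raised or lowered the score:
# income contributed +2.0, debt -1.6, years_employed +1.5.
```

The point of the sketch is that the explanation is stated in the user's vocabulary (features raised or lowered the score), not in the builder's vocabulary of weights and gradients.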
Artificial intelligence (AI) systems power the world we live in. Deep neural networks (DNNs) are able to solve tasks in an ever-expanding landscape of scenarios, but our eagerness to apply these powerful models leads us to focus on their performance and deprioritises our ability to understand them. Current research in the field of explainable AI tries to bridge this gap by developing various perturbation or gradient-based explanation techniques. For images, these techniques fail to fully capture and convey the semantic information needed to elucidate why the model makes the predictions it does. In this work, we develop a new form of explanation that is radically different in nature from current explanation methods, such as Grad-CAM.
Gutfeld and guests discuss how the Biden administration is extending the federal mask mandate for another 15 days on 'Gutfeld!' A federal judge on Monday voided the Biden administration's mask mandate for travelers using public transportation such as trains and airplanes. The mandate from the Centers for Disease Control and Prevention applies to people as young as 2 years old, and had been set to expire a number of times but was recently extended to May 3 before Monday's ruling. The ruling from U.S. District Court Judge Kathryn Kimball Mizelle came in a case brought in Florida federal court by Health Freedom Defense Fund, Inc. and frequent air travelers Ana Daza and Sarah Pope against the administration. Judge Mizelle determined that the mandate violated the Administrative Procedure Act by being outside the scope of the CDC's authority, being "arbitrary" and "capricious," and not going through the required notice-and-comment period for federal rulemaking.
DARPA, In-Q-Tel, and the US National Laboratories (for example, Argonne and Oak Ridge) are well-known government agencies that fund deep tech at the forward boundaries, the near impossible, with globally transformative solutions. The Internet is a prime example: more than 70% of the 7.8 billion population are online in 2022, closing in on 7 hours of daily mobile usage, and $500 trillion of global wealth is powered by the Internet. There is convergence between the early bets led by government funding agencies and the largest corporations and their investments. One example is from 2015, when I was invited to help the top 100 CEOs, representing nearly $100 trillion in assets under management, look ten years into the future for their investments. The resulting working groups and private summits led the member companies to invest in all the areas identified: quantum computing, blockchain, cybersecurity, big data, privacy and data, AI/ML, the future of fintech, financial inclusion, ...