These are exciting times for computational sciences with the digital revolution permeating a variety of areas and radically transforming business, science, and our daily lives. The Internet and the World Wide Web, GPS, satellite communications, remote sensing, and smartphones are dramatically accelerating the pace of discovery, engendering globally connected networks of people and devices. The rise of practically relevant artificial intelligence (AI) is also playing an increasing part in this revolution, fostering e-commerce, social networks, personalized medicine, IBM Watson and AlphaGo, self-driving cars, and other groundbreaking transformations. Unfortunately, humanity is also facing tremendous challenges. Nearly a billion people still live below the international poverty line and human activities and climate change are threatening our planet and the livelihood of current and future generations. Moreover, the impact of computing and information technology has been uneven, mainly benefiting profitable sectors, with fewer societal and environmental benefits, further exacerbating inequalities and the destruction of our planet. Our vision is that computer scientists can and should play a key role in helping address societal and environmental challenges in pursuit of a sustainable future, while also advancing computer science as a discipline. For over a decade, we have been deeply engaged in computational research to address societal and environmental challenges, while nurturing the new field of Computational Sustainability.
Science has always hinged on the idea that researchers must be able to prove and reproduce the results of their research. Simply put, that is what makes science...science. Yet in recent years, as computing power has increased, the cloud has taken shape, and data sets have grown, a problem has appeared: it has become increasingly difficult to generate the same results consistently--even when researchers use the same dataset. "One basic requirement of scientific results is reproducibility: shake an apple tree, and apples will fall downwards each and every time," observes Kai Zhang, an associate professor in the department of statistics and operations research at The University of North Carolina, Chapel Hill. "The problem today is that in many cases, researchers cannot replicate existing findings in the literature and they cannot produce the same conclusions. This is undermining the credibility of scientists and science. It is producing a crisis."
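One concrete source of irreproducibility in computational work is unseeded randomness: two runs of the same analysis on the same dataset can disagree simply because the random number generator started in a different state. The toy analysis below is a hypothetical illustration (not drawn from any study mentioned here) of how fixing the seed makes a result exactly repeatable:

```python
import random
import statistics

def run_experiment(seed=None):
    """Toy 'analysis' whose result depends on random draws; it is
    exactly repeatable only when the generator is seeded."""
    rng = random.Random(seed)  # a fixed seed yields an identical stream of draws
    sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return statistics.mean(sample)

# Seeded runs reproduce bit-for-bit; unseeded runs generally do not.
assert run_experiment(seed=42) == run_experiment(seed=42)
```

Seeding alone does not solve reproducibility (library versions, hardware, and parallelism all matter), but it removes one avoidable source of run-to-run drift.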
The concept of randomness is easy to grasp on an intuitive level but challenging to characterize in rigorous mathematical terms. In "Algorithmic Randomness" (May 2019), Rod Downey and Denis R. Hirschfeldt present a comprehensive discussion of this issue, incorporating the distinct perspectives of "statisticians, coders, and gamblers." Randomness is also a concern to "modelers" who depend on simulation models driven by random number generators or analytic models built using probabilistic assumptions. In such cases, the underlying mathematical model is often an ergodic stochastic process, and the issue is whether the output of the simulator's random number generator or the observed behavior of the real-world system being modeled is "random enough" to establish confidence in the model's predictions. In a sense, this highly pragmatic perspective represents a less restrictive approach to the issue of randomness: if any of the strong criteria described by the authors are satisfied, the output of the simulator's random number generator or the observed behavior of the system being modeled should be sufficiently random to establish confidence in a model's predictions.
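The modeler's pragmatic question of whether a generator's output is "random enough" is usually answered with statistical tests rather than the strong algorithmic criteria the authors survey. As an illustrative sketch, the frequency (monobit) test from the NIST SP 800-22 suite checks whether 0s and 1s are roughly balanced in a bit stream; the function below implements that standard formula:

```python
import math
import random

def monobit_test(bits):
    """NIST-style frequency (monobit) test: in a truly random bit string,
    the proportion of 1s should be close to 1/2.  Returns a p-value;
    small values (e.g. < 0.01) suggest the stream is not 'random enough'."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # map 0/1 to -1/+1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))     # two-sided p-value

rng = random.Random(12345)
good = [rng.getrandbits(1) for _ in range(10000)]
bad = [1] * 10000                              # obviously non-random stream

print(monobit_test(good))  # typically well above 0.01 for a sound generator
print(monobit_test(bad))   # essentially zero: the stream fails the test
```

Passing such a test is a much weaker (and more practical) standard than the definitions of algorithmic randomness discussed in the article, which is precisely the "less restrictive" pragmatic perspective described above.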
The Institute for the Future (IFTF) in Palo Alto, CA, is a U.S.-based think tank. It was established in 1968 as a spin-off from the RAND Corporation to help organizations plan for the long-term future. Roy Amara, who passed away in 2007, was IFTF's president from 1971 until 1990. Amara is best known for coining Amara's Law on the effect of technology: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." This law is best illustrated by the Gartner Hype Cycle, characterized by the "peak of inflated expectations," followed by the "trough of disillusionment," then the "slope of enlightenment," and, finally, the "plateau of productivity."
WASHINGTON – Amazon, Microsoft, and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons. Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future. "Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" the report asks. The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning that such systems would jeopardize international security and herald a third revolution in warfare after gunpowder and the atomic bomb. A panel of government experts debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on Wednesday.
In another sign that the future of work is already here, JPMorgan Chase has signed a five-year deal with a software startup that uses artificial intelligence to write marketing copy, following a successful pilot with the technology. In tests, JPMorgan Chase found that Persado's machine-learning tool crafted better ad copy than its own writers could muster, as measured by higher click rates--more than double in some cases--on digital ads for Chase cards and mortgages. In one such matchup, an ad written by a human read, "Access cash from the equity in your home." The more successful version, from Persado, read, "It's true--You can unlock cash from the equity in your home." "Persado's technology is incredibly promising," Kristin Lemkau, chief marketing officer at JPMorgan Chase, said in a statement.
AI should be built on rigorous knowledge... Note: This is a follow-up to an earlier article on causal machine learning, "AI Needs More Why". There's much to be excited about with artificial intelligence (AI) in healthcare: Google AI is improving the workflow of clinicians with predictive models for diabetic retinopathy, many new approaches are achieving expert-level performance in tasks such as classification of skin cancer, and others are surpassing the capabilities of doctors--notably the recent report of DeepMind's AI for predicting acute kidney disease, capable of detecting potentially fatal kidney injuries 48 hours before symptoms are recognized by doctors. Yet medical practitioners and researchers at the intersection of machine learning (ML) and medicine are quick to point out that these successes are not representative of the more nuanced, non-trivial challenges presented by medical research and clinical applications. These ML success stories (notably all deep learning) are disease prediction problems, learning patterns that map well-defined inputs to well-labeled outputs. Domains where instinctive pattern recognition works powerfully are what psychologist Robin Hogarth termed "kind learning environments."
On August 9, 2019, Illinois Governor J. B. Pritzker signed into law first-of-its-kind legislation regulating the use of artificial intelligence (AI) in Illinois. As previously reported by Troutman Sanders on June 26, 2019, the Illinois legislature, in what has been described as the most momentous legislative session in decades, passed the privacy statute aimed at regulating an ever-growing issue in HR: the use of AI in the hiring process. With the Governor's signature, the statute will become effective January 1, 2020. While the use of AI in the employment decision-making process might sound futuristic, many U.S. companies already use AI to streamline hiring and make the process more objective, including scanning resumes, scheduling interviews, and recently, actually conducting the first round of job interviews. These AI interviewing programs have different algorithms and methods, but essentially, they measure an applicant's facial expression, word choice, body language, and vocal tone, among other factors.
Keep your floors clean without lifting a finger. By this time every summer, my floors look like a hot mess. All those times when I should have been inside doing maintenance cleaning, I was too busy running around the beach and having a blast.
Bernie Sanders has called for a complete ban on the police use of facial recognition. The Vermont senator's proposal to "ban the use of facial recognition software for policing" is part of his broader criminal justice reform agenda. Facial recognition technology has drawn the ire of lawmakers on both sides of the aisle, some of whom have called for a "time out" on its development.