Developing responsible, human-centered artificial intelligence (AI) is a complex and resource-intensive task. As governments around the world race to meet the opportunities and challenges of developing AI, there remains an absence of deep, technical international cooperation that allows like-minded countries to leverage one another's resources and competitive advantages to facilitate cutting-edge AI research in a manner that upholds and promotes democratic values. Establishing a Multilateral AI Research Institute (MAIRI) would provide such a venue for force-multiplying AI research and development collaboration. It would also reinforce the United States' leadership as an international hub for basic and applied AI research, the development of AI governance models, and the fostering of AI norms that align with human-centric and democratic values. In its final report published in March 2021, the National Security Commission on Artificial Intelligence (NSCAI) recommended that the United States work closely with key allies and partners to establish a MAIRI and called for congressional authorization and funding to allow the National Science Foundation (NSF) to lead the effort.
Google has become synonymous with powerful search, incredible hardware, and quirky, fun technology. Unfortunately, it is also known for stretching the limits of privacy and for giving up on its product lines too soon. But these negatives notwithstanding, Google is at it again at its Google I/O event near its headquarters in Mountain View, Calif., enticing developers and consumers alike with a range of new hardware, software, and services. Yes, Google just revealed new Pixel phones, including the Pixel 6A and the Pixel 7. But those weren't the coolest technologies Google showed off on Wednesday.
For years, users of Google Maps have had numerous tools to navigate the planet: Street View, 3D representations, and more. Now Google is adding Immersive View, combining real-world imagery and artificial intelligence to make 3D maps even more lifelike. Google made the announcement at Google I/O, its annual developer conference, held for the first time in three years at the Shoreline Amphitheatre in Mountain View, Calif. "Around the world, we've mapped around 1.6 billion buildings, and over 60 million kilometers of roads today," Google CEO Sundar Pichai said. "Some remote and rural areas have previously been difficult to map due to scarcity of high-quality imagery, and distinct building types and terrain."
Chris J. Preimesberger has been researching, reporting, and analyzing IT news and trends since 1995, when, as editor of an international newsletter, Sun's Hottest, he published an article defining a new protocol called Java. Damage caused by advanced exploits such as Log4Shell and Spring4Shell has been widely documented. These vulnerabilities came out of nowhere and crippled many organizations, despite record cybersecurity industry budgets expected to clear $146B in 2022. A post from Palo Alto Networks highlights that, based on its telemetry, the company observed more than 125 million hits with associated packet captures that triggered the signature.
The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The 2022 AI Index report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, and more. The latest edition includes data from a broad set of academic, private, and non-profit organizations as well as more self-collected data and original analysis than any previous edition. The Global AI Vibrancy Tool is an interactive visualization that allows cross-country comparison of up to 29 countries across 23 indicators.
Recently, we released our report on foundation models, launched the Stanford Center for Research on Foundation Models (CRFM) as part of the Stanford Institute for Human-Centered AI (HAI), and hosted a workshop to foster community-wide dialogue. Our work received an array of responses from a broad range of perspectives; some folks graciously shared their commentaries with us. We see open discourse as necessary for forging the right norms, best practices, and broader ecosystem around foundation models. In this blog post, we talk through why we believe these models are so important and clarify several points in relation to the community response. In addition, we support and encourage further community discussion of these complex issues; feel free to reach out at firstname.lastname@example.org.
Half a billion years ago something remarkable occurred: an astonishing, sudden increase in new species of organisms. Paleontologists call it the Cambrian Explosion, and many of the animals on the planet today trace their lineage back to this event. A similar thing is happening in processors for embedded vision and artificial intelligence (AI) today, and nowhere will that be more evident than at the Embedded Vision Summit, an in-person event held in Santa Clara, California, from May 16–19. The Summit focuses on practical know-how for product creators incorporating AI and vision in their products. These products demand AI processors that balance conflicting needs for high performance, low power, and cost sensitivity.
Artificial intelligence (AI) is no longer just the future of medicine; it is already here, and over time it will transform nearly every area of medical practice, according to experts. AI encompasses machine learning, where computers get better at finding patterns or connections as more data is input; natural language processing, where computers learn to read and analyze unstructured clinical notes or patient reports; robotic process automation, such as chatbots; diagnostic capabilities such as IBM's Watson; and other processes that help with patient adherence and administrative tasks. "AI is impacting health care at every level, from the provider to the payer to pharma," according to Dan Riskin, MD, CEO and founder of Verantos, a health care data company in Palo Alto, California, that uses AI to sort through real world evidence. "AI is utilized in a multitude of ways depending on the health care ecosystem," added Athena Robinson, PhD, chief clinical officer at Woebot Labs, a digital therapeutics company in San Francisco. "Some folks think of augmented systems, such as transactional bots that you call to schedule an appointment."
SUNNYVALE, Calif., April 13, 2022 -- Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) computing, today released version 1.2 of the Cerebras Software Platform, CSoft, with expanded support for PyTorch and TensorFlow. In addition, customers can now quickly and easily train models with billions of parameters via Cerebras' weight streaming technology. PyTorch is the leading machine learning framework, used by developers to accelerate the path from research prototyping to production deployment. As model sizes increase and transformer models become more popular, it is essential that machine learning practitioners have access to fast compute solutions, like the Cerebras CS-2, that are easy to set up and use.
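To ground the claim about PyTorch's role in research prototyping, here is a minimal, generic PyTorch training loop. This is not the Cerebras weight-streaming API; it is only the standard forward/backward/step pattern that such platforms build on, with a toy model and data as illustrative placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: y = 3x + noise (placeholder for real training data)
x = torch.randn(64, 1)
y = 3 * x + 0.1 * torch.randn(64, 1)

model = nn.Linear(1, 1)  # stand-in for a much larger model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

losses = []
for _ in range(50):
    opt.zero_grad()              # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # autograd computes gradients
    opt.step()                   # parameter update
    losses.append(loss.item())
```

The same loop structure scales from this one-parameter toy to billion-parameter transformers; what changes at scale is where the weights and gradients live, which is the problem weight-streaming hardware targets.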
In the absence of a national data privacy law in the U.S., California has been more active than any other state in efforts to fill the gap on a state level. The state enacted one of the nation's first data privacy laws, and an additional law, the California Privacy Rights Act (Proposition 24), approved in 2020, will take effect in 2023. A new state agency created by the law, the California Privacy Protection Agency, recently issued an invitation for public comment on the many open questions surrounding the law's implementation. Our team of Stanford researchers, graduate students, and undergraduates examined the proposed law and concluded that data privacy can be a useful tool in regulating AI, but California's new law must be more narrowly tailored to prevent overreach, focus more on AI model transparency, and ensure people's rights to delete their personal information are not circumvented by the use of AI. Additionally, we suggest that the regulation's proposed transparency provision requiring companies to explain to consumers the logic underlying their "automated decision making" processes could be more powerful if it instead focused on providing greater transparency about the data used to enable such processes. Finally, we argue that the data embedded in machine-learning models must be explicitly included when considering consumers' rights to delete, know, and correct their data.