Recently, a recipe book called Hyperfoods was released, comprising recipes generated with the assistance of AI and machine learning algorithms. The recipes in the book are based on foods with anti-cancer properties. Artificial intelligence is seeing increasing use in the creation of food recipes. For example, companies like Analytical Flavor Systems have been using AI to analyze the flavors and textures of drinks and to design beverages catering to specific locales. Meanwhile, Plant Jammer is an app that leverages artificial intelligence to suggest recipes based on the ingredients you have in your house.
The 32-year-old is the only person to have won four World Poker Tour titles and has earned more than $7 million at tournaments. Despite his expertise, he learned something new this spring from an artificial intelligence bot. Elias was helping test new software from researchers at Carnegie Mellon University and Facebook. He and another pro, Chris "Jesus" Ferguson, each played 5,000 hands over the internet in six-way games against five copies of a bot called Pluribus. At the end, the bot was ahead by a good margin.
Abstract: Artificial intelligence (AI) is widely recognised as a transformative innovation and is already proving capable of outperforming human clinicians in the diagnosis of specific medical conditions, especially in image analysis within dermatology and radiology. These abilities are enhanced by the capacity of AI systems to learn from patient records, genomic information and real-time patient data. Whilst AI research is mounting, less attention has been paid to the practical implications for healthcare services and potential barriers to implementation. AI is recognised as Software as a Medical Device (SaMD) and is increasingly becoming a topic of interest for regulators. Unless the introduction of AI is carefully considered and gradual, there are risks of automation bias, overdependence and long-term staffing problems. This is in addition to already well-documented generic risks associated with AI, such as data privacy, algorithmic biases and corrigibility.
For the baby boomer generation and Gen Xers, the goal was to go to a traditional university, receive an education, and then find employment with an established organization they could work for the rest of their lives. Millennials and Generation Z seem less set on traditional university training. They definitely value higher education, but they are looking for alternative ways to receive it. If they can get a degree without relying on a full-time on-campus program, they will opt for that more often than not. As the cost of higher education continues to rise, more students appear to be drawn to distance learning.
The news: An international consortium of medical experts has introduced the first official standards for clinical trials that involve artificial intelligence. The move comes at a time when hype around medical AI is at a peak, with inflated and unverified claims about the effectiveness of certain tools threatening to undermine people's trust in AI overall. What it means: Announced in Nature Medicine, the British Medical Journal, and the Lancet, the new standards extend two sets of guidelines around how clinical trials are conducted and reported that are already used around the world for drug development, diagnostic tests, and other medical interventions. AI researchers will now have to describe the skills needed to use an AI tool, the setting in which the AI is evaluated, details about how humans interact with the AI, the analysis of error cases, and more. Why it matters: Randomized controlled trials are the most trustworthy way to demonstrate the effectiveness and safety of a treatment or clinical technique.
COCIR welcomes the inception impact assessment by the European Commission on ethical and legal requirements for Artificial Intelligence (AI) and the opportunity to provide feedback. Continuing our engagement in this area, and following the earlier consultation on the AI White Paper, COCIR is pleased to share its experience and expertise on the use of AI within healthcare. COCIR and its members have recently published a comprehensive in-depth analysis of Artificial Intelligence in Medical Device Legislation. The document provides a thorough analysis of the legal requirements applicable to AI-based medical devices. Based on this analysis, COCIR sees no need for novel regulatory frameworks for AI-based medical devices, because the requirements of the EU Medical Device Regulation (MDR) in combination with provisions of the General Data Protection Regulation (GDPR) are adequate to ensure excellence and trust in AI in line with European values.
For years, the lidar business has had a lot of hype but not a lot of hard numbers. Dozens of lidar startups have touted their impressive technology, but until recently it wasn't clear who, if anyone, was actually gaining traction with customers. This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED's parent company, Condé Nast. This summer, three leading lidar makers completed major fundraising rounds that included releasing public data on their financial performance.
There are a number of plausible reasons why cheapfakes have outpaced deepfakes in the political domain. One is that, despite their crudeness, cheapfakes spread widely and can capture public debate and discourse. On pure cost-benefit grounds, fakers may opt to get more bang for their buck by using existing, proven techniques for editing and manipulating media. There are also technical reasons: a recent paper by one of us points out that sophisticated machine learning systems still require plenty of time for "training," which can slow the production of a faked video to the point where it is no longer relevant to the rapidly moving social media conversation.
"One of the most salient features of our culture is that there is so much bullshit." These are the opening words of the short book On Bullshit, written by the philosopher Harry Frankfurt. Fifteen years after the publication of this surprise bestseller, the rapid progress of research on artificial intelligence is forcing us to reconsider our conception of bullshit as a hallmark of human speech, with troubling implications. What do philosophical reflections on bullshit have to do with algorithms? As it turns out, quite a lot. In May this year the company OpenAI, co-founded by Elon Musk in 2015, introduced a new language model called GPT-3 (for "Generative Pre-trained Transformer 3"). It took the tech world by storm. On the surface, GPT-3 is like a supercharged version of the autocomplete feature on your smartphone; it can generate coherent text based on an initial input. But GPT-3's text-generating abilities go far beyond anything your phone is capable of.
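The autocomplete analogy can be made concrete. GPT-3 itself is a 175-billion-parameter transformer, but the underlying idea — predict a plausible continuation from the text so far, one token at a time — can be sketched with a toy bigram (Markov) model. This is a deliberately minimal illustration, not GPT-3's actual mechanism, and all function names here are invented for the sketch:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, which words were observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Autocomplete: repeatedly sample a next word given the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # dead end: the word was never followed by anything
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran on the grass"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Where this toy model looks back only one word, GPT-3 conditions on thousands of preceding tokens with learned attention weights — which is why its continuations stay coherent over whole paragraphs while a bigram chain wanders after a sentence.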
Chess has a reputation for cold logic, but Vladimir Kramnik loves the game for its beauty. "It's a kind of creation," he says. His passion for the artistry of minds clashing over the board, trading complex but elegant provocations and counters, helped him dethrone Garry Kasparov in 2000 and spend several years as world champion. Yet Kramnik, who retired from competitive chess last year, also believes his beloved game has grown less creative. He partly blames computers, whose soulless calculations have produced a vast library of openings and defenses that top-flight players know by rote.