Quebec


AI firms 'should include members of public on boards to protect society'

The Guardian

Companies developing powerful artificial intelligence systems must have independent board members representing the "interests of society", according to an expert regarded as one of the modern godfathers of the technology. Yoshua Bengio, a co-winner of the 2018 Turing Award – referred to as the "Nobel prize of computing" – said AI firms must have oversight from members of the public, as advances in the technology accelerate rapidly. Speaking in the wake of the boardroom upheaval at the ChatGPT developer OpenAI, including the exit and return of its chief executive, Sam Altman, Bengio said a "democratic process" was needed to monitor developments in the field. "How do we make sure that these advances are happening in a way that doesn't endanger the public? How do we make sure that they're not abused for increasing one's power?" the AI pioneer told the Guardian. "To me, the answer is obvious in principle.


AI Experts Call For Policy Action to Avoid Extreme Risks

TIME - Tech

On Tuesday, 24 AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document had a particular focus on extreme risks posed by the most advanced systems, such as enabling large-scale criminal or terrorist activities. The paper makes a number of concrete policy recommendations, such as ensuring that major tech companies and public funders devote at least one-third of their AI R&D budget to projects that promote safe and ethical use of AI. The authors also call for the creation of national and international standards. Bengio, scientific director at the Montreal Institute for Learning Algorithms, says that the paper aims to help policymakers, the media, and the general public "understand the risks, and some of the things we have to do to make [AI] systems do what we want."


A 'Godfather of AI' Calls for an Organization to Defend Humanity

WIRED

This article was syndicated from the Bulletin of the Atomic Scientists, which has covered human-made threats to humanity since 1945. The main artery in Montreal's Little Italy is lined with cafés, wine bars, and pastry shops that spill into tree-lined, residential side streets. Generations of farmers, butchers, bakers, and fishmongers sell farm-to-table goods in the neighborhood's large, open-air market, the Marché Jean-Talon. But the quiet enclave also accommodates a modern, 90,000-square-foot global AI hub known as Mila–Quebec AI Institute. Mila claims to house the largest concentration of deep learning academic researchers in the world, including more than 1,000 researchers and more than 100 professors who work with more than 100 industry partners from around the globe.


Hitting the Books: Why we haven't made the 'Citizen Kane' of gaming

Engadget

Steven Spielberg's wholesome sci-fi classic, E.T. the Extra-Terrestrial, became a cultural touchstone following its release in 1982. The film's hastily developed (as in, "you have five weeks to get this to market") Atari 2600 tie-in game became a cultural touchstone for entirely different reasons. In his new book, The Stuff Games Are Made Of, experimental game maker and assistant professor in design and computation arts at Concordia University in Montreal, Pippin Barr deconstructs the game design process, using an octet of his own previous projects to shed light on specific aspects of how games could be better put together. In the excerpt below, Dr. Barr muses on what makes good cinema versus good games, and why the storytelling goals of the two mediums may not necessarily align. Excerpted from The Stuff Games Are Made Of by Pippin Barr.


#AIES2023 – panel discussion on large language models

AIHub

The sixth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) took place in Montreal, Canada, from 8 to 10 August 2023. The three-day event included keynote talks, contributed talks and poster sessions, as well as two panel discussions. The panel on large language models was moderated by Alex John London (Carnegie Mellon University), and the panellists were: Roxana Daneshjou (Stanford), Atoosa Kasirzadeh (University of Edinburgh), Kate Larson (University of Waterloo) and Gary Marchant (Arizona State University). The panellists began by talking about some of their hopes for large language models.


$7,000 a day for five catchphrases: the TikTokers pretending to be 'non-playable characters'

The Guardian

If you haven't seen them yet, the videos are mesmerizing. A content creator with long, straight hair sits at her kitchen table, rapidly stringing together nonsense catchphrases, over and over with the same cheerful expression and tone. "Ooh, you got me feeling like a cowgirl." The trend is called "NPC streaming" – named after the non-playable characters in video games that awkwardly repeat pre-programmed phrases and movements. Its most recognizable face is Pinkydoll, a Montreal content creator whose "ice cream so good" clips went viral this week.


Five ways AI could improve the world: 'We can cure all diseases, stabilise our climate, halt poverty'

The Guardian

Recent advances such as OpenAI's GPT-4 chatbot have awakened the world to how sophisticated artificial intelligence has become and how rapidly the field is advancing. Could this powerful new technology help save the world? We asked five leading AI researchers to lay out their best-case scenarios. In 1999, I predicted that computers would pass the Turing test [and be indistinguishable from human beings] by 2029. Stanford University found that alarming, and organised an international conference – experts came from all over the world.


The End Is Not Clear

Communications of the ACM

In his January 2023 Communications Viewpoint, "The End of Programming," Matt Welsh wrote "nobody actually understands how large AI models work." However, already no single person understands existing large computer systems. Indeed, no team of people understands them. Staff turnover and other practicalities of real life mean that neither the team that wrote them originally (should it still exist) nor the team currently responsible for maintaining them fully understands large software systems, which can now exceed a billion lines of code. And yet such systems are in worldwide daily use and deliver economic benefits.


Bernie Sanders, Elon Musk and White House seeking my help, says 'godfather of AI'

The Guardian

The man often touted as the godfather of artificial intelligence will be responding to requests for help from Bernie Sanders, Elon Musk and the White House, he says, just days after quitting Google to warn the world about the risk of digital intelligence. Dr Geoffrey Hinton, 75, won computer science's highest honour, the Turing Award, in 2018 for his work on "deep learning", along with Meta's Yann LeCun and the University of Montreal's Yoshua Bengio. The technology, which now underpins the AI revolution, came about as a result of Hinton's efforts to understand the human brain – efforts which convinced him that digital brains might be about to supersede biological ones. But the London-born psychologist and computer scientist might not offer the advice the powerful want to hear. "The US government inevitably has a lot of concerns around national security. And I tend to disagree with them," he told the Guardian.


From pope's jacket to napalm recipes: how worrying is AI's rapid growth?

The Guardian

When the boss of Google admits to losing sleep over the negative potential of artificial intelligence, perhaps it is time to get worried. Sundar Pichai told the CBS programme 60 Minutes this month that AI could be "very harmful" if deployed wrongly, and was developing fast. "So does that keep me up at night? Absolutely," he said. Google has launched Bard, a chatbot to rival the ChatGPT phenomenon, and its parent, Alphabet, owns the world-leading DeepMind, a UK-based AI company.

He is not the only AI insider to voice concerns. Last week, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was "not taking AI safety seriously enough". Musk told Fox News that Page wanted "digital superintelligence, basically a digital god, if you will, as soon as possible".

So how much of a danger is posed by unrestrained AI development? Musk is one of thousands of signatories to a letter published by the Future of Life Institute, a thinktank, that called for a six-month moratorium on the creation of "giant" AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft's Bing search engine. The risks cited by the letter include "loss of control of our civilization".

The approach to product development shown by AI practitioners and the tech industry would not be tolerated in any other field, said Valérie Pisano, another signatory to the letter. Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – says work is carried out to make sure these systems are not racist or violent, in a process known as alignment (ie, making sure they "align" with human values). But then they are released into the public realm. "The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that."