
AIhub coffee corner: Agentic AI

AIHub

This month we tackle the topic of agentic AI. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), and Michael Littman (Brown University). Sabine Hauert: Why is it taking off? Sanmay, perhaps you could kick off with what you noticed at AAMAS [the Autonomous Agents and Multiagent Systems conference]? Sanmay Das: It was very interesting because obviously there's suddenly been an enormous interest in what an agent is and in the development of agentic AI.


AIhub coffee corner: Bad practice in the publication world

AIHub

This month we tackle the topic of bad practice in the sphere of publication. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), and Sarit Kraus (Bar-Ilan University). Sabine Hauert: Today's topic is bad practice in the publication world. For example, people trying to cheat the review system, paper mills. What bad behaviors have you seen, and is it really a problem? Tom Dietterich: Well, I can talk about it from an arXiv point of view.


AIhub coffee corner: Is it the end of GenAI hype?

AIHub

There has been a string of articles recently about the end of generative AI hype. Our experts consider whether or not the bubble has burst. Joining the conversation this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Marija Slavkovik (University of Bergen). Sabine Hauert: There have been a number of recent articles in the mainstream media talking about the fact that AI has not made any money, and that it might be all hype, or a bubble. Marija Slavkovik: There is this article by Cory Doctorow which asks what kind of bubble AI is. I really like his take that a lot of bubbles come and go; some of them leave us something useful and some of them just generate something for a brief moment in time, like excellent revenue for the investment bankers for example.


AIhub coffee corner: Responsible and trustworthy AI

AIHub

This month, our trustees tackle the topic of trustworthy AI. Joining the conversation this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), and Sarit Kraus (Bar-Ilan University). Sabine Hauert: There was a big trustworthy autonomous systems conference a few weeks back in London, and on the back of that they've launched a big responsible AI portfolio. I know Europe has been focusing on trustworthiness and how responsible these algorithms are. Deploying these systems in a responsible way is something that people are thinking about more and more. It was interesting at that conference because, while a lot of it had to do with ethics, interfacing with humans and thinking holistically about these algorithms, there was also a strong military track discussing how you make military tools trustworthy. I always find it quite interesting that trustworthiness and responsible AI mean completely different things to different communities.


AIhub coffee corner: Regulation of AI

AIHub

Three years ago, our trustees sat down to discuss AI and regulation. A lot has happened since then, both on the technological development front and on the policy front, so we thought it was time to tackle the topic again. You can read more about that here. Joining the conversation this time are: Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), and Carles Sierra (CSIC). Sabine Hauert: Regulation of AI was a very hot topic a few months ago, and interest has definitely not died down.


AIhub coffee corner: AI risks, pause letters and the ensuing discourse

AIHub

This month, in light of the recent prominent discussions relating to perceived AI risks, we consider the pause letters and risk statements, the debate around existential threats, and how this discourse could impact the field and public perceptions. Joining the discussion this time are: Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Anna Tahovská (Czech Technical University), and Oskar von Stryk (Technische Universität Darmstadt). Sabine Hauert: In today's discussion we're going to talk about potential AI risks and the recent discourse around existential threats. Does anyone have any hot reactions? How do you feel about the discourse of existential threat? Tom Dietterich: I agree with Emily Bender and a lot of the critics that it's a distraction and a diversion from thinking about the more immediate threats.


AIhub coffee corner: Large language models for scientific writing

AIHub

The recent launches of two large language models, ChatGPT and Galactica, have led to much interest and controversy amongst the AI community, and beyond. These models, and in particular their potential use for writing scientific articles (and essays), provided the inspiration for this month's discussion. Joining the discussion this time are: Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), and Lucy Smith (AIhub). Sabine Hauert: Has anyone had a chance to use any of these new models yet? Sarit Kraus: During the summer I played with the previous version of GPT. Have you tried the latest version, Michael?


AIhub coffee corner: Is AI-generated art devaluing the work of artists?

AIHub

This month, we tackle the topic of AI-generated art and what this means for artists. Joining the discussion this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), Lucy Smith (AIhub), Anna Tahovská (Czech Technical University), and Oskar von Stryk (Technische Universität Darmstadt). Sabine Hauert: This month our topic is AI-generated art. There are lots of questions relating to the value of the art that's generated by these AI systems, whether artists should be working with these tools, and whether that devalues the work that they do. Lucy Smith: I was interested in this case, whereby Shutterstock is now going to sell images created exclusively by OpenAI's DALL-E 2. They say that they're going to compensate the artists whose work they used in training the model, but I don't know how they are going to work out how much each training image has contributed to each created image that they sell.


Leveraging Unlabeled Image Data With Self-Supervised Learning or Pseudo Labeling With Mateusz Opala - neptune.ai

#artificialintelligence

This article was originally an episode of MLOps Live, an interactive Q&A session where ML practitioners answer questions from other ML practitioners. Every episode is focused on one specific ML topic, and during this one, we talked to Mateusz Opala about leveraging unlabeled image data with self-supervised learning or pseudo-labeling. But, if you prefer a written version, here it is! Sabine: With us today, we have Mateusz Opala, who is going to be answering questions about leveraging unlabeled image data with self-supervised learning or pseudo-labeling. It's great to have you. Mateusz has held a number of leading machine learning positions at companies like Netguru and Brainly. So, Mateusz, you have a background in computer science, but how did you get more into the machine learning side of things? Mateusz: It all started during my sophomore year at university. One of my professors told me that Andrew Ng was running the first iteration of his famous machine learning course on Coursera. I started from there, then did a bachelor's thesis on deep unsupervised learning, went to Siemens to work in deep learning, and after that all my positions were strictly about machine learning. Sabine: You've been on that path ever since? Mateusz: I worked for some time before that as a backend engineer, but for most of my career I've been a machine learning engineer/data scientist. Sabine: Mateusz, to warm you up.


AIhub coffee corner: can AI make humans better?

AIHub

This month, we ask if AI can make humans better. Joining the discussion this time are: Joe Daly (AIhub and University of Bristol), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), Lucy Smith (AIhub) and Oskar von Stryk (Technische Universität Darmstadt). Joe Daly: I recently saw this Twitter thread, about how AI has made human players better at the game of Go, then this article about the game of bridge, and more generally about AI's influence on us. People were actually discussing how AI can make us better at stuff, and how we can learn things from AI. What are people's thoughts on that?