AI-Alerts


Microsoft confirms multibillion-dollar investment in firm behind ChatGPT

The Guardian

Microsoft has deepened its partnership with OpenAI, the company behind the artificial intelligence program ChatGPT, by announcing a multibillion-dollar investment in the business. It said the deal would involve deploying OpenAI's artificial intelligence models across Microsoft products, including the Bing search engine and office software such as Word, PowerPoint and Outlook. ChatGPT, an artificial intelligence chatbot, has been a sensation since it launched in November, with users marvelling at its ability to perform a variety of tasks, from writing recipes and sonnets to drafting job applications. It is at the forefront of generative AI: technology trained on vast amounts of text and images that can create content from a simple text prompt. It has also been described as "a gamechanger" that will challenge teachers in universities and schools, amid concerns that pupils are already using the chatbot to write high-quality essays with minimal human input.


ChatGPT listed as author on research papers: many scientists disapprove

#artificialintelligence

The artificial-intelligence (AI) chatbot ChatGPT that has taken the world by storm has made its formal debut in the scientific literature -- racking up at least four authorship credits on published papers and preprints. Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it's appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California. ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet. The bot is already disrupting sectors including academia: in particular, it is raising questions about the future of university essays and research production.


Rentokil pilots facial recognition system as way to exterminate rats

The Guardian

The world's largest pest control group is piloting the use of facial recognition software as a way to exterminate rats in people's homes. Rentokil said it had been developing the technology alongside Vodafone for 18 months. The surveillance technology, which is already being tested in real homes, tracks the rodents' habits and streams real-time analysis using artificial intelligence. A central command centre can then help to decide where and how to kill the rats caught on camera. Rentokil's chief executive, Andy Ransom, told the Financial Times: "With facial recognition technology you can see that rat number one behaved differently from rat number three."


How ChatGPT Will Destabilize White-Collar Work - The Atlantic

#artificialintelligence

In the next five years, AI is likely to begin reducing employment for college-educated workers. As the technology continues to advance, it will be able to perform tasks previously thought to require a high level of education and skill. This could displace workers in certain industries as companies look to cut costs by automating processes. While the exact extent of this trend is difficult to predict, it is clear that AI will have a significant impact on the job market for college-educated workers. Individuals will need to stay up to date on the latest developments in AI and consider how their skills and expertise can be leveraged in a world where machines are increasingly able to perform many tasks.


Technical Perspective: Beautiful Symbolic Abstractions for Safe and Secure Machine Learning

Communications of the ACM

Over the last decade, machine learning has revolutionized entire areas of science, from drug discovery to autonomous driving, medical diagnostics, natural language processing and many others. Despite this impressive progress, it has become increasingly evident that modern machine learning models suffer from several issues which, if not resolved, could prevent their widespread adoption. Example challenges include a lack of robustness guarantees under slight distribution shifts, the reinforcement of unfair bias present in training data, and the leakage of sensitive information through the model. Addressing these issues by inventing new methods and tools for establishing that machine learning models enjoy certain desirable guarantees is critical, especially for domains where safety and security are paramount. Indeed, over the last few years there has been substantial research progress on new techniques aiming to address the above issues, with most work so far focusing on perturbations applied to the inputs of the model.
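To make the idea of input perturbations concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to probe a model's robustness. The model and data below are toy placeholders, and FGSM is offered as a representative technique under those assumptions, not as the specific methods the article surveys.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the model and data are placeholders, and FGSM is one
# representative input-perturbation technique, not the specific methods
# discussed in the article.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.05):
    """Return a copy of batch x perturbed to increase the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Step in the direction of the loss gradient's sign, bounded by
    # epsilon, and keep pixel values in a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).detach().clamp(0.0, 1.0)

# Hypothetical usage on random tensors standing in for real images/labels.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_perturb(x, y)
changed = (model(x).argmax(dim=1) != model(x_adv).argmax(dim=1)).sum().item()
print(f"predictions flipped by the perturbation: {changed}/8")
```

A robustness guarantee of the kind the article calls for would certify that no perturbation within the epsilon bound can flip a prediction, rather than merely sampling attacks as this sketch does.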


Ethical AI is Not about AI

Communications of the ACM

Many scholars and educators argue the antidote to some of the ethical problems with artificial intelligence (AI) is to integrate ethics and AI or embed ethics in AI [2, 12, 14]. The product of this combining is supposed to lead to Ethical AI, a term that is both frequently used and seemingly elusive [5, 9, 13]. Although attempts to make AI ethical are to be lauded, too little attention has been given to what it means to "integrate" or "embed," be it integrating ethics and AI or embedding ethics in AI. A rather simple idea of additivity seems to be behind these proposals. That is, the efforts are directed toward figuring out how ethical principles can be "injected into" AI [11] or how an ethical dimension can be "added to" machines [1] or, if the focus is on the latest wave of machine learning, how to "teach" machines to act in an ethical way [10].


Computational Linguistics Finds Its Voice

Communications of the ACM

Whether computers can actually "think" and "feel" is a question that has long fascinated society. Alan M. Turing introduced a test for gauging machine intelligence as early as 1950. Movies such as 2001: A Space Odyssey and Star Wars have only served to fuel these thoughts, but while the concept was once confined to science fiction, it is rapidly emerging as a serious topic of discussion. In a few cases, the dialog has become so convincing that people have deemed machines sentient. A recent example involves former Google engineer Blake Lemoine, who published human-to-machine discussions with an AI system called LaMDA.


This could lead to the next big breakthrough in common sense AI

#artificialintelligence

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular. That’s…
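The probe described above is easy to reproduce. Below is a minimal sketch using the current OpenAI Python client; the model name and prompt wording are placeholders (the original anecdote concerns GPT-3, whose legacy completions API has since been retired), so treat this as an illustration of the technique rather than an exact replication.

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_sheep_color(n=20):
    """Ask the model the color of sheep n times and tally its answers."""
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder; the anecdote concerns GPT-3
            messages=[{"role": "user",
                       "content": "What color is a sheep? Answer with one word."}],
            temperature=1.0,  # sample, so repeated calls can differ
        )
        answers[resp.choices[0].message.content.strip().lower()] += 1
    return answers

print(sample_sheep_color())
```

A skewed tally (for example, "black" appearing far more often than sheep are black in the world) is the kind of artifact the article points to: the model reflects patterns of language, not facts about sheep.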


Huge AI models can be halved in size without degrading performance

New Scientist

Large artificial intelligence language models, like those used to run the popular ChatGPT chatbot, can be reduced in size by more than half without losing much accuracy. This could save large amounts of energy and allow people to run the models at home, rather than in huge data centres.
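The excerpt does not detail the method, but one common way to shrink a model this much is weight quantization: storing parameters at lower numeric precision. The NumPy sketch below shows simple "absmax" 8-bit quantization of a weight matrix; it is an assumed, illustrative technique, not necessarily the one the researchers used.

```python
import numpy as np

# Illustrative sketch: 8-bit "absmax" weight quantization, one common way
# to shrink a model. An assumed example, not necessarily the specific
# technique the article describes.

def quantize_int8(w):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)

print("original bytes: ", w.nbytes)  # 4 bytes per weight
print("quantized bytes:", q.nbytes)  # 1 byte per weight, 4x smaller
print("max abs error:  ", np.abs(w - dequantize(q, scale)).max())
```

Storing int8 instead of float32 cuts weight memory by a factor of four, at the cost of a small rounding error per weight, which is consistent with the article's claim that models can be more than halved in size without losing much accuracy.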


Tesla video promoting self-driving was staged, senior engineer testifies

The Guardian

A 2016 video that Tesla used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer. The video, which remains archived on Tesla's website, was released in October 2016 and promoted on Twitter by Elon Musk as evidence that "Tesla drives itself". But the Model X was not driving itself with technology Tesla had deployed, Ashok Elluswamy, director of Autopilot software at Tesla, said in the transcript of a July deposition taken as evidence in a lawsuit against Tesla for a 2018 fatal crash involving a former Apple engineer. The previously unreported testimony by Elluswamy represents the first time a Tesla employee has confirmed and detailed how the video was produced. The video carries a tagline saying: "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."