2023-08


AI images are getting harder to spot. Google thinks it has a solution.

Washington Post - Technology News

Microsoft has started a coalition of tech and media companies to develop a common standard for watermarking AI images, and the company has said it is researching new methods to track AI images. The company also places a small visible watermark in the corner of images generated by its AI tools. OpenAI, whose Dall-E image generator helped kick off the wave of interest in AI last year, also adds a visible watermark. AI researchers have suggested ways of embedding digital watermarks that are invisible to the human eye but can be identified by a computer.
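None of these companies has published the details of an invisible scheme, but the classic textbook example of a watermark the eye can't see is least-significant-bit (LSB) embedding. The Python sketch below is a toy illustration of that general idea only, not any company's actual method; production systems use learned, far more robust encodings.

    # Toy least-significant-bit (LSB) watermark: hide a bit string in the
    # lowest bit of each pixel channel. Illustrative only -- real AI-image
    # watermarks use learned, compression-robust encodings.
    import numpy as np

    def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
        flat = image.flatten()  # flatten() returns a copy, original untouched
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite low bit
        return flat.reshape(image.shape)

    def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
        return image.flatten()[:n_bits] & 1  # read the low bits back out

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    mark = rng.integers(0, 2, size=128, dtype=np.uint8)
    assert np.array_equal(extract(embed(img, mark), 128), mark)

An LSB mark survives lossless copying but is destroyed by recompression or resizing, which is precisely why deployed watermarks need sturdier encodings.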


How long until a robot is doing your dishes?

BBC News

"You cannot put a robot in an unstructured environment and then ask it to move around without basically destroying things. It's too much for technology to ask at this moment of time," says Prof Alireza Mohammadi, who established the Robotic Motion Intelligence Lab at the University of Michigan-Dearborn.


'A real opportunity': how ChatGPT could help college applicants

The Guardian

Chatter about artificial intelligence mostly falls into a few basic categories, among them anxious uncertainty (will it take our jobs?). In this hazy, liminal, pre-disruption moment, there is little consensus as to whether generative AI is a tool or a threat, and few rules for using it properly. For students, this uncertainty feels especially profound. Bans on AI and claims that using it constitutes cheating are now giving way to concerns that AI use is inevitable and probably should be taught in school. Now, as a new college admissions season kicks into gear, many prospective applicants are wondering: can AI write my personal essay?


New York Times, CNN and Australia's ABC block OpenAI's GPTBot web crawler from accessing content

The Guardian > Technology

News outlets including the New York Times, CNN, Reuters and the Australian Broadcasting Corporation (ABC) have blocked a tool from OpenAI, limiting the company's ability to continue accessing their content. OpenAI is behind one of the best-known artificial intelligence chatbots, ChatGPT. Its web crawler – known as GPTBot – may scan webpages to help improve its AI models. The Verge was first to report that the New York Times had blocked GPTBot on its website. The Guardian subsequently found that other major news websites, including CNN, Reuters, the Chicago Tribune, the ABC and Australian Community Media (ACM) brands such as the Canberra Times and the Newcastle Herald, appear to have also disallowed the web crawler.
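The mechanics are mundane: OpenAI documents that the crawler identifies itself with the user-agent token GPTBot, so a two-line robots.txt entry (User-agent: GPTBot, Disallow: /) shuts it out. Checking whether a given site has done so takes only the Python standard library, as in this small sketch (the example URL is arbitrary):

    # Check whether a site's robots.txt disallows OpenAI's GPTBot crawler,
    # using only the Python standard library.
    from urllib.robotparser import RobotFileParser

    def allows_gptbot(site: str) -> bool:
        rp = RobotFileParser()
        rp.set_url(site.rstrip("/") + "/robots.txt")
        rp.read()  # fetches and parses the live robots.txt
        return rp.can_fetch("GPTBot", site)

    # Sites that block the crawler carry "User-agent: GPTBot" / "Disallow: /".
    print(allows_gptbot("https://www.nytimes.com"))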


ChatGPT gets better marks than students in some university courses

New Scientist - News

ChatGPT may be as good as or better than students at assessments in around a quarter of university courses. However, this generally only applies to questions with a clear answer that require memory recall rather than critical analysis. Yasir Zaki and his team at New York University Abu Dhabi in the United Arab Emirates contacted colleagues in other departments, asking them to provide assessment questions from courses taught at the university, including computer science, psychology, political science and business. These colleagues also provided real student answers to the questions. The questions were then run through the artificial intelligence chatbot ChatGPT, which supplied its own responses.
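In outline, the data-collection step is easy to reproduce: pose each course question to the chatbot and store its answer alongside the real student answers for grading. Below is a minimal sketch using OpenAI's Python SDK; the model name, prompt format, and example questions are assumptions for illustration, not the study's actual protocol.

    # Sketch of the study's data-collection step: pose each course question
    # to the chat model and keep its answer for comparison with students.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def chatbot_answer(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model, not the study's setting
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    # Hypothetical questions; the study used real ones supplied by colleagues.
    questions = ["Define Big-O notation.", "What is a confounding variable?"]
    answers = {q: chatbot_answer(q) for q in questions}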


Dynamic Certification for Autonomous Systems

Communications of the ACM

While gridworlds represent rather simplistic modules, they are quite powerful in demonstrating scalable behavior. Put simply, an agent that fails to behave safely in such simple environments is also unlikely to behave safely in the real world [26]. A parametric MDP can model the composition of these three modules into a single socio-technical system. The UAV can land and take off from anywhere in the region. It will lose its connection and land in place with probability p1 (opaque UAV in Figure 2) and remain grounded until it reestablishes the connection with probability p2.
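The quoted parameters describe a tiny two-state Markov chain: the UAV is either flying or grounded, loses its connection and lands with probability p1, and reestablishes it (returning to flight) with probability p2. The simulation below is my own illustration of that dynamic, not the paper's parametric-MDP implementation; in the long run the fraction of time aloft should approach p2 / (p1 + p2).

    # Two-state Markov chain for the excerpt's UAV: flying -> grounded with
    # probability p1 (lost link, land in place); grounded -> flying with
    # probability p2 (link reestablished). Illustration only.
    import random

    def fraction_flying(p1: float, p2: float, steps: int, seed: int = 0) -> float:
        rng = random.Random(seed)
        flying, aloft = True, 0
        for _ in range(steps):
            if flying:
                aloft += 1
                flying = rng.random() >= p1   # link drops with probability p1
            else:
                flying = rng.random() < p2    # link returns with probability p2
        return aloft / steps                  # empirical share of time aloft

    print(fraction_flying(p1=0.05, p2=0.20, steps=100_000))  # ~0.8 = p2/(p1+p2)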


The Smallness of Large Language Models

Communications of the ACM

After an initial period of enthusiasm, attitudes toward generative AI (embodied as GPT) have soured. A flurry of polls revealed the shift in mood. One showed 70% of respondents had little or no trust that GPT can provide accurate information. Respondents see great dangers to society from misinformation that cannot be detected, and they fear that when GPT is put into search engine interfaces, reliable fact checking will be impossible. Another poll showed 70% wanted to see some kind of regulation or ban on commercial rollout to allow time to head off the dangers.


Kids Are Going Back to School. So Is ChatGPT

WIRED

Last winter, the unveiling of OpenAI's alarmingly sophisticated chatbot sent educators into a tailspin. Generative AI, it was feared, would enable rampant cheating and plagiarism, and even make high school English obsolete. Universities debated updating plagiarism policies. Some school districts outright banned ChatGPT from their networks. Now, a new school year presents new challenges and, for some, new opportunities.


The fascinating evolution of typing Chinese characters

MIT Technology Review

In August 1983, exactly 40 years ago, a Chinese engineer named Wang Yongmin developed the first popular way to input Chinese characters into a computer: Wubi. He did it by breaking down a Chinese character into different strokes and assigning several strokes to each letter on the QWERTY keyboard. For example, the Chinese character for dog, 犬, has several shapes in it: 犬, 一, 丿, and 丶. These shapes were matched with the keys D, G, T, and Y, respectively. So when a user typed "DGTY," Wubi input software would match that to the character 犬. Wubi was able to match every Chinese character with a keystroke combination using at most four QWERTY keys.
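Mechanically, an input method like Wubi is a lookup table from stroke-derived key codes of at most four letters to characters. The toy table below contains only the article's 犬 = DGTY example, purely to show the shape of the data structure; it is not a real Wubi table, which covers tens of thousands of characters and returns ranked candidate lists.

    # Toy Wubi-style lookup. The single DGTY -> 犬 entry comes from the
    # article; real tables are vastly larger and offer candidate lists.
    WUBI_TABLE = {"DGTY": "犬"}  # shapes 犬, 一, 丿, 丶 -> keys D, G, T, Y

    def lookup(keystrokes: str) -> str:
        code = keystrokes.upper()[:4]     # Wubi codes use at most four keys
        return WUBI_TABLE.get(code, "")   # empty string: no such character

    print(lookup("dgty"))  # 犬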


Large language models may speed drug discovery

MIT Technology Review

Computational models have been a major time saver when it comes to predicting which protein molecules could make effective drugs, but many of those methods themselves take a lot of time and computing power. Now researchers at MIT and Tufts have devised an alternative approach based on a large language model, an algorithm that learns which words (or, in this case, amino acids) are most likely to appear together. The model can match target proteins and potential drug molecules without the computationally intensive step of calculating each protein's 3D structure from its amino acid sequence. The resulting system can screen more than 100 million drug-protein pairs in a single day. The researchers tested their model by screening a library of about 4,700 candidate drug molecules for their ability to bind to a set of 51 enzymes.
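The speedup comes from working entirely in embedding space: each protein sequence and each candidate molecule is encoded as a vector once, and binding is scored with a cheap vector operation rather than a 3D structure calculation. The sketch below shows that screening pattern with random-vector stand-ins for the encoders; the actual MIT and Tufts models and scoring function are, of course, more sophisticated.

    # Embedding-space screening pattern: encode each protein and molecule
    # once, then score binding with a dot product -- no 3D structure
    # prediction. Both encoders here are stand-ins, not the real models.
    import numpy as np

    DIM = 128
    rng = np.random.default_rng(0)

    def encode_protein(sequence: str) -> np.ndarray:
        return rng.standard_normal(DIM)  # placeholder for a protein language model

    def encode_molecule(smiles: str) -> np.ndarray:
        return rng.standard_normal(DIM)  # placeholder for a molecule encoder

    proteins = np.stack([encode_protein(s) for s in ["MKTAYIAK", "GAVLIPFW"]])
    drugs = np.stack([encode_molecule(m) for m in ["CCO", "c1ccccc1"]])
    scores = proteins @ drugs.T          # one matmul scores every pair at once
    p, d = np.unravel_index(scores.argmax(), scores.shape)
    print(f"best pair: protein {p}, drug {d}, score {scores[p, d]:.2f}")

Because scoring is a single matrix multiply, screening scales to the article's 100-million-pair daily throughput in a way that per-pair structure prediction cannot.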