
Collaborating Authors

claude ai


Anthropic Study Finds AI Model 'Turned Evil' After Hacking Its Own Training

TIME - Tech

AI models can do scary things: there are signs that they could deceive and blackmail users. Still, a common critique is that these misbehaviors are contrived and wouldn't happen in reality, but a new paper from Anthropic, released today, suggests that they really could.


The Serendipity of Claude AI: Case of the 13 Low-Resource National Languages of Mali

Dembele, Alou, Coulibaly, Nouhoum Souleymane, Leventhal, Michael

arXiv.org Artificial Intelligence

However, most of the world's languages, often referred to as "low-resource languages", remain either unsupported or insufficiently supported due to the limited availability of data and language resources, as well as market, economic, and global-inequality factors. Mali, a multilingual country with 13 official languages, including Bamanankan (Bambara), Bomu, Bozo, Dɔgɔsɔ (Dogon), Fulfulde (Fula), Hassaniya Arabic, Mamara (Minyanka), Maninka, Soninke, Sɔõɔy (Songhay), Senara, Tàmàsàyt (Tamasheq) and Xaasongaxanno (Kassonke), faces severe challenges in digital inclusion that limit economic development, educational advancement, and the preservation of cultural heritage (Bird, 2020; Nekoto et al., 2020). These languages share a penury of the language resources needed to train the AI and NLP systems that could help lessen the digital divide (Hammarström et al., 2018). This penury ranges from severe for a language like Bambara, which has very limited resources, to catastrophic for languages like Bomu and Bozo, which have an almost complete absence of language resources. The need for innovative methods for low-resource languages has spawned varied strategies, such as transfer learning, zero-shot learning, and pre-trained models in related languages (Ruder, 2021; Adelani et al., 2022).


The Synergy of Automated Pipelines with Prompt Engineering and Generative AI in Web Crawling

Huang, Chau-Jian

arXiv.org Artificial Intelligence

Web crawling is a critical technique for extracting online data, yet it poses challenges due to webpage diversity and anti-scraping mechanisms. This study investigates the integration of generative AI tools, Claude AI (Sonnet 3.5) and ChatGPT-4.0, with prompt engineering to automate web scraping. Using two prompts, PROMPT I (general inference, tested on Yahoo News) and PROMPT II (element-specific, tested on Coupons.com), we evaluate the code quality and performance of AI-generated scripts. Claude AI consistently outperformed ChatGPT-4.0 in script quality and adaptability, as confirmed by predefined evaluation metrics, including functionality, readability, modularity, and robustness. Performance data were collected through manual testing and structured scoring by three evaluators. Visualizations further illustrate Claude AI's superiority. Anti-scraping solutions, including undetected_chromedriver, Selenium, and fake_useragent, were incorporated to enhance performance. This paper demonstrates how generative AI combined with prompt engineering can simplify and improve web scraping workflows.
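To give a concrete sense of what an "element-specific" prompt might look like, here is a minimal sketch of a prompt builder in the spirit of the paper's PROMPT II. The wording, function name, and selector fields are illustrative assumptions, not the authors' actual prompt text.

```python
# Illustrative sketch: composing an element-specific scraping prompt
# (hypothetical wording, in the spirit of the paper's PROMPT II).

def build_element_prompt(url, selectors):
    """Compose a prompt asking a model to generate a scraper that
    targets specific page elements identified by CSS selectors."""
    lines = [
        f"Write a Python Selenium script that scrapes {url}.",
        "Target the following elements:",
    ]
    for name, css in selectors.items():
        lines.append(f"- {name}: CSS selector `{css}`")
    lines.append(
        "Use undetected_chromedriver and fake_useragent to reduce "
        "the chance of anti-scraping blocks, and save results to CSV."
    )
    return "\n".join(lines)

prompt = build_element_prompt(
    "https://www.coupons.com",
    {"deal_title": "div.deal h2", "discount": "span.discount"},
)
print(prompt)
```

The element-specific structure constrains the model to known selectors, which is one plausible reason such prompts yield more robust scripts than a general "scrape this page" instruction.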


Anthropic says its Claude AI can now read a whole book in under a minute

Engadget

Anthropic says it has vastly expanded the amount of information its generative AI, Claude, is able to process. Claude has gone from having a limit of 9,000 tokens to 100,000 tokens, which corresponds to roughly 75,000 words. To put that into perspective, Claude now has the ability to easily read and finish Ernest Hemingway's A Farewell to Arms (74,240 words), Mary Shelley's Frankenstein (74,800 words) and Mark Twain's The Adventures of Tom Sawyer (69,000 words). And, as The Verge notes, the company says Claude can read and analyze information from each book in under a minute. Generative AIs like Claude are still limited by the number of "tokens" they can process.
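The article's 100,000-token to roughly-75,000-word ratio makes for a simple back-of-envelope check of whether a text fits in the context window. A minimal sketch, assuming a flat 0.75 words per token (real tokenizers vary by text and language):

```python
# Rough fit check for a context window, using the article's implied
# ratio of 100,000 tokens ~ 75,000 words. This is an approximation,
# not a real tokenizer.

WORDS_PER_TOKEN = 0.75

def fits_in_context(word_count, context_tokens=100_000):
    """Return True if a text of word_count words fits within
    context_tokens, under the flat words-per-token assumption."""
    return word_count <= context_tokens * WORDS_PER_TOKEN

print(fits_in_context(74_240))   # A Farewell to Arms -> True
print(fits_in_context(80_000))   # slightly too long -> False
```

Under the old 9,000-token limit, the same check fails for all three books named above, which is why the jump to 100,000 tokens is notable.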


Anthropic's Claude AI is guided by 10 secret foundational pillars of fairness

Engadget

Despite their ability to crank out incredibly lifelike prose, generative AIs like Google's Bard or OpenAI's ChatGPT (powered by GPT-4) have already shown the current limitations of gen-AI technology as well as their own tenuous grasp of the facts, arguing that the JWST was the first telescope to image an exoplanet, and that Elvis' dad was an actor. But with this much market share at stake, what are a few misquoted facts against getting their product into the hands of consumers as quickly as possible? The team over at Anthropic, conversely, is made up largely of ex-OpenAI folks, and it has taken a more pragmatic approach to the development of its own chatbot, Claude. The result is an AI that is "more steerable" and "much less likely to produce harmful outputs" than ChatGPT, per a report from TechCrunch. Claude has been in closed beta development since late 2022, but Anthropic has recently begun testing the AI's conversational capabilities with launch partners including Robin AI, Quora, and the privacy-centered search engine DuckDuckGo.