The LA Times published an op-ed warning of AI's dangers. It also published its AI tool's reply
Beneath a recent Los Angeles Times opinion piece about the dangers of artificial intelligence, there is now an AI-generated response about how AI will make storytelling more democratic. "Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon," the co-directors of the Archival Producers Alliance, Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli, wrote on 1 March. Published over the Academy Awards weekend, their comment piece focused on the specific dangers of AI-generated footage within documentary film, and the possibility that unregulated use of AI could shatter viewers' "faith in the veracity of visuals". On Monday, the Los Angeles Times's just-debuted AI tool, "Insight", labeled this argument as politically "center-left" and provided four "different views on the topic" underneath.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
New York Times Says OpenAI Erased Potential Lawsuit Evidence
This week, the Times alleged that OpenAI's engineers inadvertently erased data the paper's team spent more than 150 hours extracting as potential evidence. OpenAI was able to recover much of the data, but the Times' legal team says it's still missing the original file names and folder structure. According to a declaration filed to the court Wednesday by Jennifer B. Maisel, a lawyer for the newspaper, this means the information "cannot be used to determine where the news plaintiffs' copied articles" may have been incorporated into OpenAI's artificial intelligence models. "We disagree with the characterizations made and will file our response soon," OpenAI spokesperson Jason Deutrom told WIRED in a statement. The New York Times declined to comment.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Scientists use 6-month-old baby named Sam to teach AI how humanity develops - amid fears tech could destroy us
Scientists trained an AI through the eyes of a baby in an effort to teach the tech how humanity develops - amid fears it could destroy us. Researchers at New York University strapped a headcam recorder to Sam from when he was just six months old until his second birthday. Footage containing 250,000 words and their corresponding images was fed to an AI model, which learned to recognize different objects in much the way Sam did. The AI developed its knowledge the same way the child did - by observing the environment, listening to nearby people and connecting the dots between what was seen and heard. The experiment also probed the connection between visual and linguistic representation in a child's development.
GitHub - microsoft/JARVIS: JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
This project is under construction and we will have all the code ready soon. Language serves as an interface for LLMs to connect numerous AI models for solving complicated AI tasks! We introduce a collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors (from the HuggingFace Hub). At present, however, this means Jarvis is restricted to models running stably on HuggingFace Inference Endpoints. You can now access Jarvis' services via the Web API.
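The controller/executor pattern the README describes can be sketched in miniature. This is a hedged illustration, not JARVIS's actual code: the LLM planner is mocked here as a keyword router, and `mock_translator`/`mock_image_captioner` stand in for HuggingFace expert models.

```python
# Minimal sketch of the controller + expert-executor pattern.
# In the real system, an LLM plans tasks and dispatches them to
# expert models hosted on HuggingFace Inference Endpoints; here the
# planner and the experts are both mocked for illustration.

def mock_image_captioner(request):
    return {"task": "image-captioning", "result": "a photo of a cat"}

def mock_translator(request):
    return {"task": "translation", "result": "hola"}

# Registry of available expert executors, keyed by task type.
EXPERTS = {
    "image-captioning": mock_image_captioner,
    "translation": mock_translator,
}

def controller(request: str) -> dict:
    """Stand-in for the LLM controller: choose a task for the request,
    then dispatch to the matching expert model."""
    if "caption" in request or "describe" in request:
        task = "image-captioning"
    elif "translate" in request:
        task = "translation"
    else:
        raise ValueError(f"no expert available for request: {request!r}")
    return EXPERTS[task](request)

print(controller("translate 'hello' to Spanish")["result"])  # → hola
```

In the real system the routing step is itself an LLM call that decomposes the request into a task plan, which is what makes natural language the "interface" between models.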
Machine learning identifies first British fossil of therizinosaur dinosaur
Teeth found in Oxfordshire, Gloucestershire and Dorset are believed to belong to the maniraptorans, a group of dinosaurs that includes Velociraptor and counts birds among its closest relatives. These dinosaurs evolved into numerous species during the Middle Jurassic, but because fossils from this time are scarce, knowledge of their origins is scarce too. Researchers from the Natural History Museum and Birkbeck College used pioneering machine learning techniques to train computer models to identify the mystery teeth, which push back the origin of some of the group's members by almost 30 million years. Simon Wills, a Ph.D. student at the Natural History Museum who led the research, says, "Previous research had suggested that the maniraptorans were around in the Middle Jurassic, but the actual fossil evidence was patchy and disputed. Along with fossils found elsewhere, this research suggests the group had already achieved a global distribution by this time."
- Europe > United Kingdom > England > Oxfordshire (0.26)
- Europe > United Kingdom > England > Gloucestershire (0.26)
Papers with Code - Papers With Code : Trends
Frameworks: Repositories are classified by framework by inspecting the contents of every GitHub repository and checking for imports in the code. We limit to repositories that are implementations of papers. The date axis is the date the repository was created (NOTE: pytorch/tf support might have been added later - which explains why some repositories originally started in 2014/2015 are marked as pytorch/tf). Code Availability: For every open access machine learning paper, we check whether a code implementation is available on GitHub. The date axis is the publication date of the paper.
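The framework-classification step described above (inspecting a repository's code for framework imports) can be sketched as follows. This is a hypothetical illustration of the approach, not Papers with Code's actual pipeline; the module names in `FRAMEWORK_IMPORTS` are the obvious candidates, and the regex only handles top-level `import`/`from` statements.

```python
import re

# Map each framework label to the top-level modules that signal it.
FRAMEWORK_IMPORTS = {
    "pytorch": ("torch",),
    "tf": ("tensorflow",),
    "jax": ("jax",),
}

# Match the first module name after "import X" or "from X import ...".
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def detect_frameworks(source: str) -> set:
    """Return the set of framework labels whose modules appear in
    the given source file's import statements."""
    modules = set(IMPORT_RE.findall(source))
    return {
        fw for fw, roots in FRAMEWORK_IMPORTS.items()
        if any(root in modules for root in roots)
    }

sample = "import torch\nfrom tensorflow import keras\n"
print(sorted(detect_frameworks(sample)))  # → ['pytorch', 'tf']
```

Run over every file in a repository, this yields a framework label per repo, which also explains the caveat in the note: a repo created in 2014 gets tagged pytorch/tf if the imports were added in a later commit.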
Researchers spot origins of stereotyping in AI language technologies
A team of researchers has identified a set of cultural stereotypes that are introduced into artificial intelligence models for language early in their development--a finding that adds to our understanding of the factors that influence results yielded by search engines and other AI-driven tools. "Our work identifies stereotypes about people that widely used AI language models pick up as they learn English. The models we're looking at, and others like them for other languages, are the building blocks of most modern language technologies, from translation systems to question-answering personal assistants to industry tools for resume screening, highlighting the real danger posed by the use of these technologies in their current state," says Sam Bowman, an assistant professor at NYU's Department of Linguistics and Center for Data Science and the paper's senior author. "We expect this effort and related projects will encourage future research towards building more fair language processing systems." The work dovetails with recent scholarship, such as Safiya Umoja Noble's "Algorithms of Oppression: How Search Engines Reinforce Racism" (NYU Press, 2018), which chronicles how racial and other biases have plagued widely used language technologies.
The practical application of 'Thinking' Artificial Intelligence
The power of AI – providing simple solutions to complex business problems Fountech design, develop and integrate AI into the core of your business, often by releasing the untapped potential of Big Data. Sometimes that's data you'll already have, sometimes we'll enable you to find it. We regard ourselves as an AI think-tank, rather than just a development company. Our approach can turn your business ideas into tangible results using targeted AI. That's why our core philosophy is: 'you don't just learn Artificial Intelligence - you need to think it'.
AI in drug discovery is overhyped: examples from AstraZeneca, Harvard, Stanford and Insilico…
Investments in AI for drug discovery are surging, and Big Pharma is throwing big bucks at it. Sanofi signed a $300 million deal with the startup Exscientia, and GSK did the same for $42 million. The Silicon Valley VC firm Andreessen Horowitz launched a new $450 million bio investment fund, with one focus area being applications of AI to drug discovery. Amid this craze, lots of pharma/biotech companies and investors wonder whether they should jump on the bandwagon in 2018, or wait and see.
[D] High Dimensional Spaces, Deep Learning and Adversarial Examples is this paper any good? Thoughts? • r/MachineLearning
This paper provides a useful theoretical underpinning to a field that has had very little theoretical study. It's not groundbreaking, but it's useful. The authors try to make stronger claims than they should in the intro/conclusion that might put the reader off from the paper, and that's unfortunate. The biggest new useful theoretical result is the discussion of the surface area vs volume of the adversarial subspace. They also echo some comments from other work on possible future defense strategies.