Trajectories of Change: Approaches for Tracking Knowledge Evolution
Schlattmann, Raphael, Vogl, Malte
We explore local and global evolution of knowledge systems through the framework of socio-epistemic networks (SEN), applying two complementary methods to a corpus of scientific texts. The framework comprises three interconnected layers (social, semiotic/material, and semantic), proposing a multilayered approach to understanding structural developments of knowledge. To analyse diachronic changes on the semantic layer, we first use information-theoretic measures based on relative entropy to detect semantic shifts, assess their significance, and identify the features driving them. Second, variations in document-embedding densities reveal changes in semantic neighbourhoods, tracking whether concentrations of similar documents increase, remain stable, or disperse. This enables us to trace document trajectories based on content (topics) or metadata (authorship, institution). Case studies of Joseph Silk and Hans-Jürgen Treder illustrate how individual scholars' work aligns with broader disciplinary shifts in general relativity and gravitation research, demonstrating the applications, limitations, and further potential of this approach.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
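The relative-entropy measure in the abstract above can be sketched as a symmetrized divergence between word-frequency distributions from two time slices, with per-word contributions pointing at the driving features. The corpora, tokenization, and the specific choice of Jensen-Shannon divergence here are illustrative assumptions, not the authors' exact pipeline:

```python
import math
from collections import Counter

def js_divergence(p: Counter, q: Counter):
    """Jensen-Shannon divergence (base 2) between two word-frequency
    distributions, plus each word's contribution to the shift."""
    total_p, total_q = sum(p.values()), sum(q.values())
    contrib = {}
    for w in set(p) | set(q):
        pw, qw = p[w] / total_p, q[w] / total_q
        mw = 0.5 * (pw + qw)
        term = 0.0
        if pw > 0:
            term += 0.5 * pw * math.log2(pw / mw)
        if qw > 0:
            term += 0.5 * qw * math.log2(qw / mw)
        contrib[w] = term
    return sum(contrib.values()), contrib

# Hypothetical word counts from an early and a late time slice.
early = Counter("steady state cosmology steady state steady model".split())
late = Counter("big bang cosmology inflation model".split())
shift, contrib = js_divergence(early, late)
driving = max(contrib, key=contrib.get)  # word contributing most to the shift
```

Significance testing (e.g. against shuffled time slices) would sit on top of this; the divergence itself is bounded in [0, 1], so shifts across periods are directly comparable.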
Renaissance of Literate Programming in the Era of LLMs: Enhancing LLM-Based Code Generation in Large-Scale Projects
Zhang, Wuyang, Li, Yansong, Dong, Zeyu, Wu, Yu, Zhou, Yingyao, Wang, Duolei, Xing, Songsirou, Zhou, Chichun, Shen, Da
Large Language Models (LLMs) have helped programmers increase efficiency through code generation, comprehension, and repair. However, their application to large-scale projects remains challenging due to complex interdependencies and the extensive size of modern codebases. Although Knuth's concept of Literate Programming (LP) combines code and natural language to convey logic and intent, its potential for clarifying the relationships among components of large projects has not been fully explored. In this study, we introduce the idea of Interoperable LP (ILP), which leverages literate-programming principles to enhance the development of both small-scale documents and large-scale projects with LLMs. We investigate how LLMs perform under ILP-style instructions for both document-oriented tasks and entire projects. Recognizing that many researchers rely on well-structured templates to guide LLMs, we propose a concise prompt-engineering method for writing LP documents so that LLMs can be more effectively involved in code generation. We also examine the capacity of various LLMs to generate Scheme and Python code on the RepoBench benchmark, illustrating the advantages of our approach. Our findings indicate that ILP with LLMs can enhance LLM-based code generation in large-scale project development.
- North America > United States > Massachusetts (0.04)
- North America > United States > New York > Suffolk County > Stony Brook (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
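The paper's exact template is not reproduced here, but a literate-programming prompt in the noweb spirit (named chunks delimited by `<<name>>=` and `@`) might be assembled like this sketch; the chunk format and field names are assumptions for illustration:

```python
def lp_prompt(intent: str, chunks: list[tuple[str, str]], task: str) -> str:
    """Assemble an LP-style prompt: each named chunk pairs natural-language
    intent with the code it documents (noweb-like delimiters), so an LLM
    sees logic and implementation side by side."""
    parts = [f"Overall intent: {intent}", ""]
    for name, code in chunks:
        parts += [f"<<{name}>>=", code, "@", ""]
    parts.append(f"Task: {task}")
    return "\n".join(parts)

# Hypothetical usage: one documented chunk plus a generation task.
prompt = lp_prompt(
    "Cache parsed config files to avoid repeated disk reads.",
    [("load and memoize a config file", "def load_config(path): ...")],
    "Implement load_config with an LRU cache.",
)
```

The point of the structure is that the explanation travels with the code it describes, so the model receives intent and implementation as a single, ordered document rather than as detached context.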
Renaissance: Investigating the Pretraining of Vision-Language Encoders
Fields, Clayton, Kennington, Casey
In the past several years, there has been an explosion of available models for vision-language tasks. Unfortunately, the literature still leaves open a number of questions about best practices in designing and training such models. In this paper, we seek to answer several questions related to the pretraining of vision-language encoders through meta-analysis. In our first set of experiments, we show that we can save significant compute, at no cost to downstream performance, by freezing large parts of vision-language models during pretraining. In our second set of experiments, we examine the effect of basing a VL transformer on a vision model versus a text model. Additionally, we introduce a VL modeling platform called Renaissance, which we use to conduct all of the experiments. It offers a great deal of flexibility in creating, training, and evaluating transformer encoders for VL modeling. The source code for Renaissance can be found at https://github.com/bsu-slim/renaissance.
- North America > United States > Idaho > Ada County > Boise (0.05)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
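The compute-saving recipe above (freezing pretrained submodules during pretraining) can be illustrated with a toy optimizer step that skips frozen parameter groups. This is a stdlib stand-in for what marking parameters as not requiring gradients does in a deep-learning framework; the module names and values are made up:

```python
# Toy model: pretrained encoders are frozen, only the fusion head trains.
model = {
    "vision_encoder": {"w": [0.5, -0.2], "frozen": True},   # pretrained, frozen
    "text_encoder":   {"w": [0.1, 0.3],  "frozen": True},   # pretrained, frozen
    "fusion_head":    {"w": [0.0, 0.0],  "frozen": False},  # trained from scratch
}

def sgd_step(model, grads, lr=0.1):
    """Apply one gradient step, skipping frozen modules entirely.
    In practice, skipping them also saves the backward pass compute."""
    for name, module in model.items():
        if module["frozen"]:
            continue
        module["w"] = [w - lr * g for w, g in zip(module["w"], grads[name])]

grads = {name: [1.0, 1.0] for name in model}
sgd_step(model, grads)
```

Only `fusion_head` moves; the frozen encoders keep their pretrained weights, which mirrors the paper's finding that large frozen portions need not hurt downstream performance.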
Deep Ensemble Art Style Recognition
Menis-Mastromichalakis, Orfeas, Sofou, Natasa, Stamou, Giorgos
The massive digitization of artworks during the last decades has created the need to categorize, analyse, and manage huge amounts of data related to abstract concepts, a challenging problem in computer science. The rapid progress of artificial intelligence and neural networks has provided tools and technologies that seem worthy of the challenge. Recognition of various art features in artworks has gained attention in the deep learning community. In this paper, we are concerned with the problem of art style recognition using deep networks. We compare the performance of 8 different deep architectures (VGG16, VGG19, ResNet50, ResNet152, Inception-V3, DenseNet121, DenseNet201, and Inception-ResNet-V2) on two different art datasets, including 3 architectures that have never been used for this task before, leading to state-of-the-art performance. We study the effect of data preprocessing prior to applying a deep learning model. We introduce a stacking ensemble method that combines the results of first-stage classifiers through a meta-classifier; its versatile approach, based on multiple models that extract and recognize different characteristics of the input, yields a more consistent model than existing works and achieves state-of-the-art accuracy on the largest art dataset available (WikiArt: 68.55%). We also discuss the impact of the data and of the art styles themselves on the performance of our models, offering a multifaceted perspective on the problem.
- Europe > Greece > Attica > Athens (0.04)
- North America > Canada > British Columbia > East Kootenay Region > Fernie (0.04)
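The stacking ensemble described above concatenates the first-stage classifiers' class probabilities into one meta-feature vector, which a second-stage model scores. This toy sketch uses fixed weights (in a real stacking setup they would be fit on held-out first-stage predictions) and three hypothetical style classes:

```python
def stack_features(first_stage_probs):
    """Concatenate each base model's class-probability vector into one
    meta-feature vector, as in a stacking ensemble."""
    return [p for probs in first_stage_probs for p in probs]

def meta_classify(features, weights):
    """Toy linear meta-classifier: score each class with its weight row
    over the stacked features, return the argmax class index."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return scores.index(max(scores))

# Two hypothetical base models, three style classes
# (e.g. 0=Baroque, 1=Cubism, 2=Impressionism).
probs_model_a = [0.7, 0.2, 0.1]
probs_model_b = [0.6, 0.3, 0.1]
features = stack_features([probs_model_a, probs_model_b])
weights = [
    [1, 0, 0, 1, 0, 0],  # class 0 reads both models' class-0 scores
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
]
predicted = meta_classify(features, weights)  # class 0
```

Because the meta-classifier sees all base outputs jointly, it can learn which model to trust for which styles, rather than simply averaging votes.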
The Terrible Twenties? The Assholocene? What to Call Our Chaotic Era
In the winter of 2020, on one of my aimless, frigid quarantine walks around my silent neighborhood, I remember being struck by a thought: did a medieval European peasant know that he was living through what is now widely known as the Dark Ages? Was there some moment when he leaned against his hoe in the fields, gazed up at the uncaring sky, and dimly perceived that he was unlucky enough to have been born into a bad century, perhaps even a bad millennium, too late for classical antiquity and too early for the Renaissance? I was sympathetic toward that notional peasant, because I was feeling the same way. The tide of history was overwhelming; I was minuscule, my life brought to a terrifying standstill by an airborne virus. I thought that if the humans who survived into the year 2500 looked back on my era, they would see it as cursed or benighted, the beginning of a downward slide.
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- Europe > Ukraine (0.05)
- Europe > Russia (0.05)
Has Great Potential! Meet Your A.I. Realtor
The spectre of artificial intelligence is worrying lots of workers, but one office is welcoming it with open arms and an apple pie in the oven. "There are many people who, at 2 a.m., are on their phones, looking at what's on the market," Fredrik Eklund, of the real-estate agency the Eklund Gomes Team, said the other day. He sat in the reception area of his Flatiron office wearing a pale-pink blazer, jeans, and thick black-framed eyeglasses. "Now they can talk to Maya. Her shop is open 24/7, and she is always there."
- North America > United States > New York (0.08)
- North America > United States > California > Los Angeles County > Long Beach (0.05)
RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model
Bie, Fengxiang, Yang, Yibo, Zhou, Zhongzhu, Ghanem, Adam, Zhang, Minjia, Yao, Zhewei, Wu, Xiaoxia, Holmes, Connor, Golnari, Pareesa, Clifton, David A., He, Yuxiong, Tao, Dacheng, Song, Shuaiwen Leon
Text-to-image generation (TTI) refers to models that process text input and generate high-fidelity images from text descriptions. Text-to-image generation with neural networks can be traced back to the emergence of Generative Adversarial Networks (GANs), followed by autoregressive Transformers. Diffusion models, one prominent type of generative model, synthesize images by systematically adding noise over repeated steps and learning to reverse the process. Owing to their impressive results on image synthesis, diffusion models have been cemented as the major image decoder used by text-to-image models, bringing text-to-image generation to the forefront of machine-learning (ML) research. In the era of large models, scaling up model size and integrating large language models have further improved the performance of TTI models, yielding generations nearly indistinguishable from real-world images and revolutionizing the way we retrieve images. Our explorative study leads us to believe that text-to-image models can be scaled further by combining innovative model architectures with prediction-enhancement techniques. We have divided this survey into five main sections, in which we detail the frameworks of the major literature in order to examine the different types of text-to-image generation methods. We then provide a detailed comparison and critique of these methods and offer possible pathways of improvement for future work. Finally, we argue that TTI development could yield impressive productivity improvements for creation, particularly in the context of the AIGC era, and could be extended to more complex tasks such as video and 3D generation.
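The "systematic introduction of noise" in diffusion models has a closed form for the forward process: x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε with ε ~ N(0, 1), where ᾱ_t shrinks as t grows. A minimal sketch on toy data (the signal values and schedule point are made up for illustration):

```python
import math
import random

def add_noise(x0, alpha_bar, rng):
    """One forward-diffusion draw: x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps,
    with eps sampled i.i.d. from a standard normal."""
    a = math.sqrt(alpha_bar)
    b = math.sqrt(1.0 - alpha_bar)
    return [a * x + b * rng.gauss(0.0, 1.0) for x in x0]

rng = random.Random(0)
x0 = [1.0, -1.0, 0.5]                               # toy "image"
x_noisy = add_noise(x0, alpha_bar=0.9, rng=rng)      # early step: mostly signal
x_pure_noise = add_noise(x0, alpha_bar=0.0, rng=rng) # late step: signal destroyed
```

The generative model is trained to invert this corruption step by step; at ᾱ_t = 1 the data passes through untouched, and at ᾱ_t = 0 nothing of x_0 remains.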
In Defense of Humanity
On July 13, 1833, during a visit to the Cabinet of Natural History at the Jardin des Plantes, in Paris, Ralph Waldo Emerson had an epiphany. Peering at the museum's specimens--butterflies, hunks of amber and marble, carved seashells--he felt overwhelmed by the interconnectedness of nature, and humankind's place within it. The experience inspired him to write "The Uses of Natural History," and to articulate a philosophy that put naturalism at the center of intellectual life in a technologically chaotic age--guiding him, along with the collective of writers and radical thinkers known as transcendentalists, to a new spiritual belief system. Through empirical observation of the natural world, Emerson believed, anyone could become "a definer and map-maker of the latitudes and longitudes of our condition"--finding agency, individuality, and wonder in a mechanized age. America was crackling with invention in those years, and everything seemed to be speeding up as a result.
- North America > United States > Massachusetts > Middlesex County > Concord (0.04)
- North America > United States > California (0.04)
- Europe (0.04)
- Law (0.47)
- Information Technology (0.47)
Crossing The Threshold Into The AI Renaissance
GPT Summary: The rapid advancements in artificial intelligence (AI) have brought humanity to a critical juncture, similar to the Renaissance of the 14th-17th centuries. AI is experiencing a functional rebirth, with machines surpassing human performance in various cognitive tasks. These developments raise philosophical questions about the nature of human intelligence and our roles in a world where AI is omnipresent. Striking the right balance between innovation and regulation is crucial, as ethical concerns need addressing. By exploring AI's function and philosophical aspects, we can harness its power to enhance our lives, create new opportunities, and unlock the next renaissance in human-machine collaboration.
Working with Projective Geometry, Part 1 (Machine Learning)
Abstract: We show how the birth of perspective painting in the Italian Renaissance led to a new way of interpreting space that resulted in the creation of projective geometry. Unlike other works on this subject, we explicitly show how the craft of the painters implied the introduction of new points and lines (points and lines at infinity) and their projective coordinates, completing the Euclidean space to what is now called projective space.
Abstract: Many algorithms are based on geometric computation. There are several criteria for selecting an appropriate algorithm from those already known; recently, the fastest algorithms have been preferred.
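The painters' points at infinity fall out of homogeneous coordinates almost immediately: represent both points and lines as triples, take the cross product of two lines to get their intersection, and parallel lines meet at a point whose last coordinate w is 0. A minimal stdlib sketch:

```python
def cross(a, b):
    """Cross product of homogeneous triples. For two lines it yields their
    intersection point; for two points, the line through them."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Two parallel lines y = x and y = x + 1, written in homogeneous line
# coordinates (a, b, c) meaning a*x + b*y + c*w = 0:
l1 = (1.0, -1.0, 0.0)   # x - y = 0
l2 = (1.0, -1.0, 1.0)   # x - y + 1 = 0
meet = cross(l1, l2)
# meet == (-1.0, -1.0, 0.0): w = 0, i.e. the point at infinity in
# direction (1, 1) -- exactly the vanishing point the painters drew.
```

Homogeneous triples are defined only up to scale, so (-1, -1, 0) and (1, 1, 0) name the same projective point; the w = 0 test is what distinguishes ideal points from finite ones.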