
Collaborating Authors

 DiSanto, Nick


Transcending the Attention Paradigm: Representation Learning from Geospatial Social Media Data

arXiv.org Artificial Intelligence

While transformers have pioneered attention-driven architectures as a cornerstone of language modeling, their dependence on explicitly contextual information underscores limitations in their ability to tacitly learn overarching textual themes. This study challenges the heuristic paradigm of performance benchmarking by investigating social media data as a source of distributed patterns. In stark contrast to networks that rely on capturing complex long-term dependencies, online data inherently lacks structure, forcing models to detect latent patterns in the aggregate. To properly represent these abstract relationships, this research dissects empirical social media corpora into their elemental components, analyzing over two billion tweets across population-dense locations. We create Bag-of-Words embeddings specific to each city and compare their respective representations. The comparison shows that, even amidst noisy data, geographic location has a considerable influence on online communication, and that hidden insights can be uncovered without the crutch of advanced algorithms. This evidence carries valuable geospatial implications for social science and challenges the notion that intricate models are prerequisites for pattern recognition in natural language. It aligns with an evolving landscape that questions the embrace of absolute interpretability over abstract understanding and bridges the divide between sophisticated frameworks and intangible relationships.
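
As a rough illustration of the city-level comparison described above, the sketch below builds a Bag-of-Words vector for each city's tweets and compares cities pairwise with cosine similarity. This is a minimal sketch under stated assumptions: the toy `city_tweets` data, the vectorizer settings, and the similarity measure are illustrative choices, not the paper's actual pipeline or corpus.

```python
# Minimal sketch: per-city Bag-of-Words vectors compared by cosine similarity.
# The toy corpus and vectorizer settings below are illustrative assumptions;
# the study itself analyzes over two billion tweets from population-dense cities.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-in for the per-city tweet corpora.
city_tweets = {
    "new_york": ["traffic on the bridge again", "best bagels downtown"],
    "los_angeles": ["beach day with perfect weather", "stuck on the 405 again"],
    "chicago": ["windy walk along the lake", "deep dish for dinner"],
}

cities = list(city_tweets)
# One document per city: concatenate that city's tweets.
docs = [" ".join(tweets) for tweets in city_tweets.values()]

# Shared vocabulary across cities; raw counts form the Bag-of-Words embedding.
vectorizer = CountVectorizer(stop_words="english")
bow = vectorizer.fit_transform(docs)

# Pairwise similarity between the city-level representations.
sim = cosine_similarity(bow)
for i, a in enumerate(cities):
    for j, b in enumerate(cities):
        if i < j:
            print(f"{a} vs {b}: {sim[i, j]:.2f}")
```

With real corpora, higher similarity between two cities' vectors would suggest more shared vocabulary in their online communication, while lower similarity would point to the geographic influence the abstract describes.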


Beyond Interpretable Benchmarks: Contextual Learning through Cognitive and Multimodal Perception

arXiv.org Artificial Intelligence

With state-of-the-art models achieving high performance on standard benchmarks, contemporary research paradigms continue to emphasize general intelligence as an enduring objective. However, this pursuit overlooks the fundamental disparities between the high-level data perception abilities of artificial and natural intelligence systems. This study questions the Turing Test as a criterion of generally intelligent thought and contends that it is misinterpreted as an attempt to anthropomorphize computer systems. Instead, it emphasizes tacit learning as a cornerstone of general-purpose intelligence, despite its lack of overt interpretability. This abstract form of intelligence necessitates contextual cognitive attributes that are crucial for human-level perception: generalizable experience, moral responsibility, and implicit prioritization. The absence of these features yields undeniable perceptual disparities and constrains the cognitive capacity of artificial systems to effectively contextualize their environments. Additionally, this study establishes that, despite extensive exploration of potential architectures for future systems, little consideration has been given to how such models will continuously absorb and adapt to contextual data. While conventional models may continue to improve in benchmark performance, disregarding these contextual considerations will lead to stagnation in human-like comprehension. Until general intelligence can be abstracted from task-specific domains and systems can learn implicitly from their environments, research standards should instead prioritize the disciplines in which AI thrives.