Visualising AI spending: How does it compare with history's mega projects?

Al Jazeera

World leaders and tech executives are convening in New Delhi for the India-AI Impact Summit 2026, focusing on the role of artificial intelligence in governance, job disruption and global collaboration. However, behind these discussions lies the financial reality. Over the past decade, AI has drawn one of the largest waves of private investment in modern history, totalling trillions of dollars. According to Gartner, a United States-based business and technology insights company, worldwide spending on AI is forecast to total $2.5 trillion in 2026, a 44 percent increase over 2025.


Tech billionaires fly in for Delhi AI expo as Modi jostles to lead the global south

The Guardian

Campaigners fear Narendra Modi could use AI to increase state surveillance and sway elections. Silicon Valley tech billionaires will land in Delhi this week for an AI summit hosted by India's prime minister, Narendra Modi, where leaders of the global south will wrestle for control over the fast-developing technology. During the week-long AI Impact Summit, attended by thousands of tech executives, government officials and AI safety experts, tech companies valued at trillions of dollars will rub shoulders with leaders of countries such as Kenya and Indonesia, where average wages dip well below $1,000 a month. Amid a push to speed up AI adoption across the globe, Sundar Pichai, Sam Altman and Dario Amodei, the heads of Google, OpenAI and Anthropic, will all be there.





A Training Examples

Neural Information Processing Systems

Market research indicates that there is a significant opportunity for a new coffee bar located in the heart of the downtown business district.





Language Model Tokenizers Introduce Unfairness Between Languages

Neural Information Processing Systems

Recent language models have shown impressive multilingual performance, even when not explicitly trained for it. Despite this, there are concerns about the quality of their outputs across different languages. In this paper, we show how disparity in the treatment of different languages arises at the tokenization stage, well before a model is even invoked. The same text translated into different languages can have drastically different tokenization lengths, with differences up to 15 times in some cases. These disparities persist even for tokenizers that are intentionally trained for multilingual support.
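A minimal sketch of the disparity the abstract describes. This is not the paper's methodology: the `utf8_length` helper and the sample sentences are illustrative assumptions. UTF-8 byte counts are used here only as a crude proxy for byte-level BPE token counts, since scripts that need more bytes per character generally receive more tokens from byte-level tokenizers.

```python
# Illustrative sketch (not the paper's method): byte-level BPE tokenizers
# operate over UTF-8 bytes, so scripts with more bytes per character tend
# to be split into more tokens. Raw UTF-8 byte counts give a rough,
# lower-bound proxy for that disparity.

def utf8_length(text: str) -> int:
    """Number of UTF-8 bytes, a crude proxy for byte-level token count."""
    return len(text.encode("utf-8"))

# Roughly the same greeting in three languages (illustrative samples).
samples = {
    "English": "Hello, how are you?",
    "Hindi": "नमस्ते, आप कैसे हैं?",
    "Burmese": "မင်္ဂလာပါ၊ နေကောင်းလား။",
}

english_bytes = utf8_length(samples["English"])
for language, text in samples.items():
    ratio = utf8_length(text) / english_bytes
    print(f"{language}: {utf8_length(text)} bytes ({ratio:.1f}x English)")
```

Running the sketch shows the Devanagari and Burmese versions needing roughly two to three times the bytes of the English one; actual tokenizer-level ratios reported in the paper can be far larger, since merge vocabularies are also skewed toward high-resource languages.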