intelligent technology
MS-Mix: Unveiling the Power of Mixup for Multimodal Sentiment Analysis
Zhu, Hongyu, Chen, Lin, El-Yacoubi, Mounim A., Shang, Mingsheng
Multimodal Sentiment Analysis (MSA) aims to identify and interpret human emotions by integrating information from heterogeneous data sources such as text, video, and audio. While deep learning models have advanced in network architecture design, they remain heavily limited by scarce multimodal annotated data. Although Mixup-based augmentation improves generalization in unimodal tasks, its direct application to MSA introduces critical challenges: random mixing often amplifies label ambiguity and semantic inconsistency due to the lack of emotion-aware mixing mechanisms. To overcome these issues, we propose MS-Mix, an adaptive, emotion-sensitive augmentation framework that automatically optimizes sample mixing in multimodal settings. The key components of MS-Mix include: (1) a Sentiment-Aware Sample Selection (SASS) strategy that prevents the semantic confusion caused by mixing samples with contradictory emotions; (2) a Sentiment Intensity Guided (SIG) module that uses multi-head self-attention to dynamically compute modality-specific mixing ratios based on each modality's emotional intensity; and (3) a Sentiment Alignment Loss (SAL) that aligns prediction distributions across modalities and adds a Kullback-Leibler-based regularization term to jointly train the emotion intensity predictor and the backbone network. Extensive experiments on three benchmark datasets with six state-of-the-art backbones confirm that MS-Mix consistently outperforms existing methods, establishing a new standard for robust multimodal sentiment augmentation. The source code is available at: https://github.com/HongyuZhu-s/MS-Mix.
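The three components in the abstract can be illustrated with a toy sketch. Everything below is hypothetical: the function name, the similarity threshold, and the Beta-distributed ratios are my own stand-ins, not the paper's actual SASS/SIG implementation (which uses attention over emotional intensity). The sketch only shows the two core ideas: restrict mixing partners to samples with similar sentiment, and give each modality its own mixing ratio.

```python
import numpy as np

def ms_mix_sketch(features, labels, alpha=0.4, label_gap=1.0):
    """Toy sentiment-aware Mixup for one training sample.

    features: dict of modality name -> (N, d_m) feature arrays
    labels:   (N,) continuous sentiment scores
    Returns one mixed sample (dict of vectors) and its mixed label.
    """
    labels = np.asarray(labels, dtype=float)
    i = np.random.randint(len(labels))
    # Sentiment-aware selection (stand-in for SASS): only mix with
    # partners whose sentiment score is close to the anchor's, so
    # contradictory emotions are never blended together.
    cand = np.where(np.abs(labels - labels[i]) <= label_gap)[0]
    cand = cand[cand != i]
    j = int(np.random.choice(cand)) if len(cand) else i
    # Modality-specific mixing ratios (stand-in for SIG): the paper
    # derives these via multi-head self-attention; here each modality
    # simply draws its own Beta(alpha, alpha) ratio.
    lams = {m: float(np.random.beta(alpha, alpha)) for m in features}
    mixed = {m: lams[m] * x[i] + (1.0 - lams[m]) * x[j]
             for m, x in features.items()}
    # Mix the label with the mean ratio across modalities.
    lam_bar = float(np.mean(list(lams.values())))
    return mixed, lam_bar * labels[i] + (1.0 - lam_bar) * labels[j]
```

In a real training loop this would run per batch, and the mixed label would feed both the task loss and (per the paper) the alignment/KL regularization terms.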
- Asia > China > Chongqing Province > Chongqing (0.77)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Research Report (1.00)
- Overview (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.86)
Assessing employment and labour issues implicated by using AI
Willems, Thijs, Hotan, Darion Jin, Tang, Jiawen Cheryl, Norhashim, Norakmal Hakim bin, Poon, King Wang, Goh, Zi An Galvyn, Vinod, Radha
This chapter critiques the dominant reductionist approach in AI and work studies, which isolates tasks and skills as replaceable components. Instead, it advocates for a systemic perspective that emphasizes the interdependence of tasks, roles, and workplace contexts. Two complementary approaches are proposed: an ethnographic, context-rich method that highlights how AI reconfigures work environments and expertise; and a relational task-based analysis that bridges micro-level work descriptions with macro-level labor trends. The authors argue that effective AI impact assessments must go beyond predicting automation rates to include ethical, well-being, and expertise-related questions. Drawing on empirical case studies, they demonstrate how AI reshapes human-technology relations, professional roles, and tacit knowledge practices. The chapter concludes by calling for a human-centric, holistic framework that guides organizational and policy decisions, balancing technological possibilities with social desirability and sustainability of work.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Asia > Singapore (0.05)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- (2 more...)
Data, AI and automation will never replace humans. Fact - TechNative
We’ve all heard the scare stories. The availability of endless data will allow organisations to become less reliant on the human workforce. Artificial Intelligence (AI) is going to be smarter than humans. And automation will take away lots of our jobs. How much of this is really true, though? Despite advances in these technologies, like conversational AI, they’re just tools to be used in the endeavour of making our lives easier and organisations more productive. But even a tool with contextual and conversational capabilities can’t provide the unique flexibility of human touch and the true ingenuity that we all desire.
ChatGPT Stole Your Work. So What Are You Going to Do?
If you've ever uploaded photos or art, written a review, "liked" content, answered a question on Reddit, contributed to open source code, or done any number of other activities online, you've done free work for tech companies, because downloading all this content from the web is how their AI systems learn about the world. Tech companies know this, but they mask your contributions to their products with technical terms like "training data," "unsupervised learning," and "data exhaust" (and, of course, impenetrable "Terms of Use" documents). In fact, much of the innovation in AI over the past few years has been in ways to use more and more of your content for free. This is true for search engines like Google, social media sites like Instagram, AI research startups like OpenAI, and many other providers of intelligent technologies. This exploitative dynamic is particularly damaging when it comes to the new wave of generative AI programs like Dall-E and ChatGPT.
Postdoctoral Researcher: NOLAI Ethical Aspects of AI in Education
Are you a scientist with a keen interest in education, research and intelligent technologies? At the National Education Lab for Artificial Intelligence (NOLAI in Dutch), we develop innovative and intelligent technologies aimed at improving the quality of primary and secondary education. Over the next ten years, NOLAI teams up with schools, universities and companies to create new innovative examples of AI in education. As a postdoctoral researcher on ethical aspects of AI in education, you can contribute to NOLAI's goals in our scientific programme. The new National Education Lab AI (NOLAI), located at Radboud University in the Netherlands, is looking for a postdoctoral researcher to study the ethical and social implications of AI in education.
Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles
Srba, Ivan, Moro, Robert, Tomlein, Matus, Pecher, Branislav, Simko, Jakub, Stefancova, Elena, Kompan, Michal, Hrckova, Andrea, Podrouzek, Juraj, Gavornik, Adrian, Bielikova, Maria
In this paper, we present the results of an auditing study performed on YouTube aimed at investigating how quickly a user can get into a misinformation filter bubble, but also what it takes to "burst the bubble", i.e., revert the bubble enclosure. We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content. They then try to burst the bubbles and reach more balanced recommendations by watching misinformation-debunking content. We record search results, home page results, and recommendations for the watched videos. Overall, we recorded 17,405 unique videos, of which we manually annotated 2,914 for the presence of misinformation. The labeled data was used to train a machine learning model classifying videos into three classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the trained model to classify the remaining videos, which it would not have been feasible to annotate manually. Using both the manually and automatically annotated data, we observe the misinformation bubble dynamics for a range of audited topics. Our key finding is that even though filter bubbles do not appear in some situations, when they do, it is possible to burst them by watching misinformation-debunking content (although this manifests differently from topic to topic). We also observe a sudden decrease in the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations. Finally, when comparing our results with a previous similar study, we do not observe significant improvements in the overall quantity of recommended misinformation content.
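The annotate-a-subset-then-classify-the-rest step can be sketched in a few lines. This is not the authors' model (they report 0.82 accuracy but the abstract does not specify the classifier); the bag-of-words nearest-centroid classifier below is my own minimal stand-in for the idea of training on the 2,914 hand-labeled videos and applying the model to the remaining ones.

```python
import numpy as np

CLASSES = ["promoting", "debunking", "neutral"]

def bow(texts, vocab):
    """Bag-of-words count matrix over a fixed vocabulary."""
    X = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        for w in t.lower().split():
            if w in vocab:
                X[r, vocab[w]] += 1
    return X

def fit_centroids(texts, labels):
    """Build the vocabulary and one mean vector per class."""
    words = sorted({w for t in texts for w in t.lower().split()})
    vocab = {w: i for i, w in enumerate(words)}
    X = bow(texts, vocab)
    cents = {c: X[[l == c for l in labels]].mean(axis=0) for c in CLASSES}
    return vocab, cents

def classify(texts, vocab, cents):
    """Assign each text to the class with the most similar centroid."""
    out = []
    for x in bow(texts, vocab):
        sims = {c: float(x @ v) / ((np.linalg.norm(x) or 1.0) *
                                   (np.linalg.norm(v) or 1.0))
                for c, v in cents.items()}
        out.append(max(sims, key=sims.get))
    return out
```

A real pipeline would of course use richer features (transcripts, metadata) and a stronger model, but the train-on-labeled / predict-on-unlabeled structure is the same.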
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Slovakia > Bratislava > Bratislava (0.05)
- North America > United States > Virginia (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.47)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.92)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.67)
Artificial Intelligence is Indian Navy's new strategic frontline
In modern geopolitics, the role of the Indian Navy is going to be more challenging, and its active participation could decide India's place in the global power play. The seminar "Swavlamban" on the SPRINT Challenges, chaired by PM Modi on July 18th, 2022, showcased New Delhi's seriousness about strengthening the Indian Navy through modern indigenous technologies. The presence of the Chinese third-generation research and survey ship "Yuan Wang 5" in Hambantota, Sri Lanka, is sufficient to show that the Indo-Pacific is going to be the future arena of geopolitics. This is pushing India to adopt modern cutting-edge naval technologies to protect the country's interests and counter foreign powers. Technology has always been an important agent that decides or redefines the parameters of war with distinctive outcomes.
- Asia > Sri Lanka > Southern Province > Hambantŏṭa District > Hambantŏṭa (0.25)
- Asia > India > NCT > New Delhi (0.25)
- Indian Ocean (0.05)
- Asia > India > Uttarakhand > Dehradun (0.05)
- Government > Military > Navy (1.00)
- Government > Regional Government > Asia Government > India Government (0.84)
Is AI Really A Job Killer? These Experts Say No
If you believe all the doom and gloom in the news today, you might think automation and the deployment of AI-enabled systems at work will replace scores of jobs worldwide. Is AI Really a Job Killer? But management and technology experts Thomas Davenport and Steven Miller argue that AI is not a job destroyer -- no matter what other predictions might say. Yes, AI and intelligent technology will take over some jobs, but that will free up workers to do more challenging and important work. Tom and Steven recently completed a book on this topic called Working with AI: Real Stories of Human-Machine Collaboration, and I got the chance to speak with them about their predictions for how AI will fit in with the workplaces of the future.
- Information Technology > Communications > Social Media (0.52)
- Information Technology > Artificial Intelligence > Applied AI (0.36)
Maximizing Software Quality With Artificial Intelligence - AI Summary
But considering that an estimated 85% of AI projects fail to deliver on their goals, it's clear that many software development organizations are struggling to understand what skills actually help their teams harness the power of intelligent technologies. Today AI and ML are helping quality teams by ensuring that tests are only run when the application reaches the correct state, making sure that developers and testers can dedicate more time to fixing defects rather than investigating accidental failures. These advanced reporting features help QA teams efficiently identify small changes or errors, and ensure that anomalies are addressed before they lead to more severe issues. But as important as artificial intelligence and machine learning are to the future of software development and quality engineering, most QA professionals are too busy to become AI experts. To maximize their time, effort, and skillset, QA teams are better served by mastering key artificial intelligence and machine learning fundamentals that will enable them to start embracing advanced testing techniques and AI-based solutions as quickly as possible.
How AI Can Make Strategy More Human
The power of AI is now within reach of all companies, opening up a new world of strategy innovation and enabling companies to leave the constraints of legacy architecture behind forever. Three new related high-potential strategies include: Forever Beta, Minimum Viable Idea (MVI), and Co-lab. This article explains each in detail, with examples of companies that are currently using them. Though their specific strategies are distinct, the companies share three important characteristics. First, their technology, business strategy, and execution are so closely intertwined as to be nearly indistinguishable. Second, humans — not machines — are in the driver’s seat. Third, these companies understand that all companies, no matter their industry, are now technology companies. But technology-driven business strategies require farseeing leaders. Those who are able to see opportunities at the new radically human nexus of people and technology will pre-empt disruption and seize the future.
- North America > United States > New York (0.05)
- Europe > United Kingdom (0.05)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.05)