Ethical AI for Young Digital Citizens: A Call to Action on Privacy Governance
Shouli, Austin, Barthwal, Ankur, Campbell, Molly, Shrestha, Ajay Kumar
The rapid expansion of Artificial Intelligence (AI) in digital platforms used by youth has created significant challenges related to privacy, autonomy, and data protection. While AI-driven personalization offers enhanced user experiences, it often operates without clear ethical boundaries, leaving young users vulnerable to data exploitation and algorithmic biases. This paper presents a call to action for ethical AI governance, advocating for a structured framework that ensures youth-centred privacy protections, transparent data practices, and regulatory oversight. We outline key areas requiring urgent intervention, including algorithmic transparency, privacy education, parental data-sharing ethics, and accountability measures. Through this approach, we seek to empower youth with greater control over their digital identities and propose actionable strategies for policymakers, AI developers, and educators to build a fairer and more accountable AI ecosystem.
Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review
Shrestha, Ajay Kumar, Barthwal, Ankur, Campbell, Molly, Shouli, Austin, Syed, Saad, Joshi, Sandhya, Vassileva, Julita
This systematic literature review investigates perceptions, concerns, and expectations of young digital citizens regarding privacy in artificial intelligence (AI) systems, focusing on social media platforms, educational technology, gaming systems, and recommendation algorithms. Using a rigorous methodology, the review started with 2,000 papers, narrowed down to 552 after initial screening, and finally refined to 108 for detailed analysis. Data extraction focused on privacy concerns, data-sharing practices, the balance between privacy and utility, trust factors in AI, transparency expectations, and strategies to enhance user control over personal data. Findings reveal significant privacy concerns among young users, including a perceived lack of control over personal information, potential misuse of data by AI, and fears of data breaches and unauthorized access. These issues are worsened by unclear data collection practices and insufficient transparency in AI applications. The intention to share data is closely associated with perceived benefits and data protection assurances. The study also highlights the role of parental mediation and the need for comprehensive education on data privacy. Balancing privacy and utility in AI applications is crucial, as young digital citizens value personalized services but remain wary of privacy risks. Trust in AI is significantly influenced by transparency, reliability, predictable behavior, and clear communication about data usage. Strategies to improve user control over personal data include access to and correction of data, clear consent mechanisms, and robust data protection assurances. The review identifies research gaps and suggests future directions, such as longitudinal studies, multicultural comparisons, and the development of ethical AI frameworks.
Meta revenue soars as it pivots to AI and announces dividends for investors
Meta shares soared 12% in after-hours trading following a strong fourth-quarter earnings report, released the day after CEO Mark Zuckerberg took a beating in a contentious congressional hearing. The company also announced it will pay a 50-cent-per-share dividend to investors for the first time and has authorized a $50bn share buyback program. Overall, Meta reported fourth-quarter revenue of $40.1bn, beating the predicted $39.18bn and up 25% year-over-year. The report comes as Meta, like many of its big tech peers, seeks to integrate artificial intelligence tools into its core products. In a statement accompanying the report, Zuckerberg said Meta has "made a lot of progress on our vision for advancing AI and the metaverse".
The Download: Joy Buolamwini on AI, and Meta's beauty filter lawsuit
AI researcher and activist Joy Buolamwini is best known for a pioneering paper she co-wrote with Timnit Gebru that exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to improve their software so it would be less biased, and to back away from selling their technology to law enforcement. Now, Buolamwini has a new target in sight: she is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies pen the rules that apply to them, repeating the very mistake that has previously allowed biased and oppressive technology to thrive.