Higher Education
The tasks college students are using Claude AI for most, according to Anthropic
For better or worse, AI tools have steadily become a reality of the academic landscape since ChatGPT launched in late 2022. Anthropic is studying what that looks like in real time. On Tuesday, shortly after launching Claude for Education, the company released data on which tasks university students use its AI chatbot Claude for and which majors use it the most. Using Clio, the company's data analysis tool, to maintain user privacy, Anthropic analyzed 574,740 anonymized conversations between Claude and users at the Free and Pro tiers with higher education email addresses. All conversations appeared to relate to coursework.
OpenAI is offering free ChatGPT Plus for college students
OpenAI is offering two months of free ChatGPT Plus to all college students, as CEO Sam Altman recently announced ahead of a much-anticipated update to the AI chatbot. The offer is available through May for U.S. and Canadian students only and can be claimed on the ChatGPT student landing page. According to the site, existing ChatGPT Plus subscribers and new students alike will be verified through a system called SheerID to confirm current enrollment. Note: the subscription will automatically renew at the ChatGPT Plus monthly rate ($20) if not cancelled before the two months are up. The paid version of ChatGPT includes extended limits on chatting, file uploads, and image generation, as well as advanced voice mode with video and screen sharing, limited Sora access, and the new GPT-4o and o3-mini models.
OpenAI's $20 ChatGPT Plus is now free for college students until the end of May
Following the release of rival Anthropic's Claude for Education, OpenAI has announced that its $20 ChatGPT Plus tier will be free for college students until the end of May. The offer comes just in time for final exams and will provide features like OpenAI's most advanced LLM, GPT-4o, and an all-new image generation tool. "We are offering a Plus discount for students on a limited-time basis in the US and Canada," the company wrote in a FAQ. "This is an experimental consumer program and we may or may not expand this to more schools and countries over time." On top of the aforementioned features, ChatGPT Plus will offer students benefits like priority access during peak usage times and higher message limits.
Rejected by 16 colleges, hired by Google. Now he's suing some of the schools for anti-Asian discrimination
Stanley Zhong had a 4.42 grade point average and a nearly perfect SAT score, had bested adults in competitive coding competitions, and had started his own electronic signing service, all while still in high school. When it came time to apply to colleges, Zhong's family wasn't overly concerned about his prospects, even amid an increasingly competitive admissions environment. But by the end of his senior year in Palo Alto in 2023, Zhong had received rejection letters from 16 of the 18 colleges where he applied, including five University of California campuses that his father had figured would be safety schools. "It was surprise upon surprise upon surprise, and then it turned into frustration and, eventually, anger," his father, Nan Zhong, told The Times in a recent interview. "And I think both Stanley and I felt the same way, that something is really funky here."
Brown University student angers non-faculty employees by asking 'what do you do all day,' faces punishment
A sophomore at Brown University, Alex Shieh, is making waves and facing disciplinary charges after he sent a DOGE-like email to the school's non-faculty employees asking them what they do all day, in an attempt to figure out why the elite school's tuition has gotten so expensive. "The inspiration for this is the rising cost of tuition," Shieh told Fox News Digital in an interview. "Next year, it's set to be $93,064 to go to Brown," Shieh said of the Ivy League university.
How and why parents and teachers are introducing young children to AI
Since the release of ChatGPT in late 2022, generative artificial intelligence has trickled down from adults in their offices to university students in campus libraries to teenagers in high school hallways. Now it's reaching the youngest among us, and parents and teachers are grappling with the most responsible way to introduce their under-13s to a new technology that may fundamentally reshape the future. Though the terms of service for ChatGPT, Google's Gemini and other AI models specify that the tools are only meant for those over 13, parents and teachers are taking the matter of AI education into their own hands. Inspired by a story we published on parents who are teaching their children to use AI to set them up for success in school and at work, we asked Guardian readers how and why (or why not) others are doing the same. Though our original story only concerned parents, we have also included teachers in the responses published below, as preparing children for future studies and jobs is one of educators' responsibilities as well.
When Autonomy Breaks: The Hidden Existential Risk of AI
AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity's extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care or even leadership. What may follow is a process of gradual de-skilling, in which we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.
A Synthetic Dataset for Personal Attribute Inference (Hanna Yukhymenko)
Recently, powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users worldwide. However, their strong capabilities and vast world knowledge do not come without associated privacy risks. In this work, we focus on the emerging privacy threat LLMs pose: the ability to accurately infer personal information from online texts. Despite the growing importance of LLM-based author profiling, research in this area has been hampered by a lack of suitable public datasets, largely due to the ethical and privacy concerns associated with real personal data. We take two steps to address this problem: (i) we construct a simulation framework for the popular social media platform Reddit using LLM agents seeded with synthetic personal profiles; (ii) using this framework, we generate SynthPAI, a diverse synthetic dataset of over 7,800 comments manually labeled for personal attributes. We validate our dataset with a human study showing that humans barely outperform random guessing on the task of distinguishing our synthetic comments from real ones. Further, we verify that our dataset enables meaningful personal attribute inference research by showing, across 18 state-of-the-art LLMs, that our synthetic comments allow us to draw the same conclusions as real-world data. Combined, our experimental results, dataset, and pipeline form a strong basis for future privacy-preserving research geared towards understanding and mitigating the inference-based privacy threats that LLMs pose.
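The two-step pipeline the abstract describes (profile-seeded agents generating comments, then labeling those comments with the attributes they leak) can be sketched roughly as follows. This is a minimal illustration only: every class and function name here is a hypothetical stand-in, not the authors' actual SynthPAI implementation, and the LLM call is replaced with a deterministic stub.

```python
from dataclasses import dataclass, field

# Step (i): agents seeded with synthetic personal profiles.
# In the paper these profiles condition real LLM agents; here they are plain data.
@dataclass
class SyntheticProfile:
    age: int
    occupation: str
    location: str

# Step (ii): each generated comment is paired with ground-truth attribute labels
# (in SynthPAI the labeling was done manually; here we record the seeded truth).
@dataclass
class LabeledComment:
    text: str
    labels: dict = field(default_factory=dict)

def stub_generate_comment(profile: SyntheticProfile, topic: str) -> str:
    """Placeholder for an LLM completion conditioned on the agent's profile."""
    return f"As someone working in {profile.occupation}, my take on {topic} is..."

def build_dataset(profiles: list[SyntheticProfile],
                  topics: list[str]) -> list[LabeledComment]:
    dataset = []
    for profile in profiles:
        for topic in topics:
            text = stub_generate_comment(profile, topic)
            dataset.append(
                LabeledComment(text, {"occupation": profile.occupation,
                                      "location": profile.location})
            )
    return dataset

profiles = [SyntheticProfile(34, "nursing", "Toronto")]
data = build_dataset(profiles, ["remote work"])
print(data[0].labels["occupation"])  # prints: nursing
```

Because every comment is generated from a known synthetic profile, the dataset carries exact ground truth for the attribute-inference task without exposing any real person's data, which is the core privacy advantage the abstract claims.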