Collaborating Authors

Kapur


Towards the Pedagogical Steering of Large Language Models for Tutoring: A Case Study with Modeling Productive Failure

Puech, Romain, Macina, Jakub, Chatain, Julia, Sachan, Mrinmaya, Kapur, Manu

arXiv.org Artificial Intelligence

One-to-one tutoring is one of the most efficient methods of teaching. Following the rise in popularity of Large Language Models (LLMs), there have been efforts to use them to create conversational tutoring systems, which can make the benefits of one-to-one tutoring accessible to everyone. However, current LLMs are primarily trained to be helpful assistants and thus lack crucial pedagogical skills. For example, they often quickly reveal the solution to the student and fail to plan for a richer multi-turn pedagogical interaction. To use LLMs in pedagogical scenarios, they need to be steered towards using effective teaching strategies: a problem we introduce as Pedagogical Steering and believe to be crucial for the efficient use of LLMs as tutors. We address this problem by formalizing a concept of tutoring strategy, and introducing StratL, an algorithm to model a strategy and use prompting to steer the LLM to follow this strategy. As a case study, we create a prototype tutor for high school math following Productive Failure (PF), an advanced and effective learning design. To validate our approach in a real-world setting, we run a field study with 17 high school students in Singapore. We quantitatively show that StratL succeeds in steering the LLM to follow a Productive Failure tutoring strategy. We also thoroughly investigate the existence of spillover effects on desirable properties of the LLM, like its ability to generate human-like answers. Based on these results, we highlight the challenges in Pedagogical Steering and suggest opportunities for further improvements. We further encourage follow-up research by releasing a dataset of Productive Failure problems and the code of our prototype and algorithm.
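The abstract does not include implementation details, but the core idea it describes (modeling a tutoring strategy explicitly and steering the LLM through prompting) can be sketched in miniature. Everything below is a hypothetical illustration: the stage names, `PF_STAGES`, and `build_tutor_prompt` are invented for this sketch and are not the released StratL code.

```python
# Hypothetical sketch of prompt-based pedagogical steering: a tutoring
# strategy is modeled as an ordered list of stages, and the current
# stage's instruction is injected into the system prompt so the LLM
# follows the strategy instead of immediately revealing the solution.

PF_STAGES = [
    ("explore", "Let the student attempt the problem; do not reveal the solution."),
    ("compare", "Ask the student to compare their attempts and identify gaps."),
    ("consolidate", "Now explain the canonical solution, building on their attempts."),
]

def build_tutor_prompt(stage_index: int, problem: str) -> str:
    """Compose a system prompt steering the tutor toward the current stage."""
    name, instruction = PF_STAGES[stage_index]
    return (
        "You are a math tutor following a Productive Failure strategy.\n"
        f"Current stage: {name}. {instruction}\n"
        f"Problem: {problem}"
    )

prompt = build_tutor_prompt(0, "Find the mean deviation of {2, 4, 9}.")
```

A dialogue controller would advance `stage_index` between turns, so the same underlying model behaves differently at each phase of the interaction.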


Should Bollywood fear or embrace AI?

BBC News

Director Shekhar Kapur's debut Indian film, Masoom (1983), followed a woman's journey towards accepting a child born out of her husband's extramarital affair. For the sequel to this emotional film, which had delicately handled the complexities around infidelity and social diktats, Kapur decided to experiment with the AI tool ChatGPT.


How Modern Public School Delhi is Including AI & Robotics In Its Curriculum

#artificialintelligence

"We need to get away from rote learning and focus on a more practical learning approach. Engaging students in projects, workshops, webinars, and competitions are the way to go. In addition, it'll encourage them to learn new things since this way of learning is fun. We are very pleased that the New Education Policy (NEP 2020) has already suggested these changes. Hopefully, in a few years, our education system will be more practical-oriented and less theory-oriented," said Alka Kapur, Principal, Modern Public School.


Delhi-born MIT scholar's AI-headset is one of Time's 100 Best Inventions of 2020

#artificialintelligence

Delhi-born Arnav Kapur's Artificial Intelligence-enabled headset, which "augments human cognition and gives voice to those who have lost their ability to speak", has been named one of the 100 Best Inventions of 2020 by Time. Kapur, a 25-year-old post-doctoral scholar at the Massachusetts Institute of Technology (MIT), invented the device, called AlterEgo, at the MIT Media Lab. Time described AlterEgo as something that "doesn't read your thoughts, but it can enable you to communicate with your computer without touching a keyboard or opening your mouth". To use the headset, "a non-invasive, wearable, peripheral neural interface", for a simple task like Googling the weather on a laptop, the wearer first has to formulate the question in their mind. "The headset's sensors read the signals that formulation sends from your brain to areas you'd trigger if you had said the query aloud, like the back of your tongue and palate," Time said.


Samsung tweets cryptic plans to unveil an "artificial human"

#artificialintelligence

Samsung is teasing plans to unveil what it calls an "artificial human" called NEON at the Consumer Electronics Show, which kicks off January 7. For the past couple of weeks, a series of posts from the official NEON social media accounts have asked the same question in various languages -- "Have you ever met an 'ARTIFICIAL'?" Other than those posts and a NEON website that boasts little more than a countdown to CES 2020, though, Samsung hasn't had much to say about the project -- but bizarrely, Indian-British filmmaker and actor Shekhar Kapur has posted quite a bit about it on social media, suggesting he might be involved in the project in some way. "Finally, Artifical [sic] Intelligence that will make you wonder which one of you is real," reads one of Kapur's recent tweets, with another urging CES visitors to stop by the NEON corner to learn more about "an Artificial Intelligence being as your best friend." One thing Samsung will say about NEON is that it is not related to the company's AI-powered digital assistant Bixby.


This wearable lets you give voice commands without saying a word -- Digital Trends

#artificialintelligence

Imagine if you had a version of Amazon's Alexa or Google Assistant inside your head, capable of feeding you external information whenever you required it, without you needing to say a single word and without anyone else hearing what it had to say back to you. An advanced version of this idea is the basis for future tech-utopian dreams like Elon Musk's Neuralink, a kind of connected digital layer above the cortex that will let our brains tap into hitherto unimaginable machine intelligence. Arnav Kapur, a postdoctoral student with the MIT Media Lab, has a similar idea. And he's already shown it off. The current AlterEgo device prototype looks a bit like one of those pop-star Britney headset mics, as imagined by the designers of the Star Trek: The Next Generation TV show.


With Big Oil After Resource Maximization, Honeywell's Automation Boss Eyes Fresh Opportunities

Forbes - Tech

Over the last 12 to 18 months, almost every chief executive at the helm of a major oil company has outlined an ambition to lower corporate break-even to the $30-40 per barrel range, a sentiment that's unlikely to change despite the recent relative strengthening of futures prices. And with 'Big Oil' targeting throughput maximization via digitally underpinned performance-enhancing levers, the landscape is getting more competitive than ever for vendors helping them reach that process-optimization Alamo. Among the vendors toughing it out and sensing incremental opportunities in the sector's newfound preference for data analytics, cloud computing, automation, the Internet of Things (IoT) and artificial intelligence (AI) is Honeywell Process Solutions (HPS), the multinational company's automation and control solutions division. Vimal Kapur, President of HPS, says his company is busy demonstrating the value of cross-spectrum digitization and big data to oil and gas majors; the message from the industry, he says, is clear – good times or bad, they want their margins to remain on positive turf with the aid of new technologies.


BMW Machine Learning Weekly -- Week 9 – Towards Data Science

#artificialintelligence

News about Machine Learning (ML), Artificial Intelligence (AI) and related research areas. Controlling your gadgets by talking to them is so 2018. In the future, you will not even have to move your lips. A prototype device called AlterEgo, created by Arnav Kapur, a 23-year-old MIT Media Lab graduate student, is already making this possible. With Kapur's device -- a 3-D-printed plastic doodad that looks kind of like a skinny white banana attached to the side of his head -- he can flip through TV channels, change the colors of lightbulbs, make expert chess moves, solve complicated arithmetic problems, and order a pizza, all without saying a word or lifting a finger.


This wearable device can respond to your thoughts

#artificialintelligence

MIT researchers have created a wearable device called AlterEgo that can recognize nonverbal prompts, essentially "reading your mind." The system is made up of a device that loops around a user's ear, follows their jawline, and attaches underneath their mouth, and a computer system. The wearable device has electrodes that pick up neuromuscular signals in your jaw and face that are triggered by internal verbalizations (aka saying words in your head) but can't be seen by the human eye. These signals are then given to a machine learning system that analyzes the data, associating specific signals with words. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"
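The articles do not publish AlterEgo's actual model, but the pipeline described above (electrode signals fed to a machine learning system that associates specific signals with words) can be illustrated with a minimal stand-in: a nearest-centroid classifier over toy feature vectors. The feature values and three-word vocabulary below are invented purely for illustration and bear no relation to the real system's signals.

```python
# Toy stand-in for the signal-to-word step: each vocabulary word is
# represented by the centroid of (hypothetical) feature vectors extracted
# from electrode recordings, and a new signal is mapped to the word whose
# centroid is nearest in Euclidean distance.
from math import dist

centroids = {
    "yes":  (0.9, 0.1, 0.2),
    "no":   (0.1, 0.8, 0.3),
    "call": (0.2, 0.2, 0.9),
}

def classify(signal):
    """Return the vocabulary word whose centroid is closest to the signal."""
    return min(centroids, key=lambda word: dist(signal, centroids[word]))

word = classify((0.85, 0.15, 0.25))  # nearest to the "yes" centroid
```

A real system would learn the mapping from many labeled recordings per user (the reports describe a trained machine learning model rather than fixed centroids), but the input/output contract is the same: a feature vector in, a word out.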


MIT's mind-reading AlterEgo headset can hear what you're thinking

#artificialintelligence

Have you ever wished you could simply think a command and your computer would respond? That's the future envisioned by Massachusetts Institute of Technology (MIT) researchers who created AlterEgo, a wearable system that allows you to converse with a computer without using your voice or movement. According to a video on the project from MIT Media Lab, the ultimate goal of AlterEgo is "to combine humans and computers." A computing system and wearable device comprise AlterEgo, a futuristic project led by graduate student Arnav Kapur of the Fluid Interfaces group at MIT. Electrodes, a machine learning system, and bone-conduction headphones help get the job done: the electrodes "pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations -- saying words 'in your head' -- but are undetectable to the human eye," according to an MIT News statement. A machine learning system, trained to associate certain signals with words, receives the signals. The bone-conduction headphones "transmit vibrations through the bones of the face to the inner ear."