In December, the University of Texas at Austin's computer science department announced that it would stop using a machine-learning system to evaluate applicants for its Ph.D. program due to concerns that encoded bias may exacerbate existing inequities in the program and in the field in general. This move toward more inclusive admissions practices is a rare (and welcome) exception to a worrying trend in education: Colleges, standardized test providers, consulting companies, and other educational service providers are increasingly adopting predatory, discriminatory, and outright exclusionary student data practices. Student data has long been used as a college recruiting and admissions tool. In 1972, College Board, the company that owns the PSAT, the SAT, and the AP Exams, created its Student Search Service and began licensing student names and data profiles to colleges (hence the college catalogs that fill the mailboxes of high school students who have taken the exams). Today, College Board licenses millions of student data profiles every year for 47 cents per examinee.
Last month, the British television network Channel 4 broadcast an "alternative Christmas address" by Queen Elizabeth II, in which the 94-year-old monarch was shown cracking jokes and performing a dance popular on TikTok. Of course, it wasn't real: The video was produced as a warning about deepfakes--apparently real images or videos that show people doing or saying things they never did or said. If an image of a person can be found, new technologies using artificial intelligence and machine learning now make it possible to show that person doing almost anything at all. The dangers of the technology are clear: A high-school teacher could be shown in a compromising situation with a student; a neighbor could be depicted as a terrorist. Can deepfakes, as such, be prohibited under American law?
For the first time in his life, Pete Peeks was able to use both hands to hang Christmas lights outside his house this year -- thanks to the help of a high school robotics team. Peeks, 38, was born without the full use of his right hand, and though many may take gripping a nail, hammering it in, and stringing holiday lights for granted, Peeks said it was beyond his wildest dreams. Early this month, he became one of the latest clients of the Sequoyah High School Robotics Team in Canton, Georgia. The team designs and 3D-prints custom prostheses to send free of charge to people around the world who need them. And as Americans gather for the winter holidays, the students will be at home continuing their work.
Picture this: a small group of middle school students is learning about ancient Egypt, so they strap on virtual reality headsets and, with the assistance of an artificial intelligence tour guide, begin to explore the Pyramids of Giza. The teacher, also journeying to one of the oldest known civilizations via a VR headset, has assigned students to gather information to write short essays. During the tour, the AI guide fields questions from students, points them to specific artifacts, and discusses what they see. To prepare the AI-powered lesson on Egypt, the teacher would have worked beforehand with the AI program to craft a lesson plan that not only dives deep into the subject but also figures out how to keep the group moving through the virtual field trip and how to create more equal participation during the discussion. In that scenario, the AI listens, observes, and interacts naturally to enhance a group learning experience and to make a teacher's job easier.
Determining the readability of a text is the first step to its simplification. In this paper, we present a readability analysis tool capable of analyzing text written in the Bengali language to provide in-depth information on its readability and complexity. Despite being the 7th most spoken language in the world with 230 million native speakers, Bengali suffers from a lack of fundamental resources for natural language processing. Readability-related research on the Bengali language so far can be considered narrow and sometimes faulty due to the lack of resources. Therefore, we carefully adapt document-level readability formulas traditionally used in the U.S. education system to the Bengali language with a proper age-to-age comparison. Due to the unavailability of large-scale human-annotated corpora, we further divide the document-level task into a sentence-level task and experiment with neural architectures, which will serve as a baseline for future work on Bengali readability prediction. In the process, we present several human-annotated corpora and dictionaries: a document-level dataset comprising 618 documents with 12 different grade levels, a large-scale sentence-level dataset comprising more than 96K sentences with simple and complex labels, a consonant conjunct count algorithm and a corpus of 341 words to validate the effectiveness of the algorithm, a list of 3,396 easy words, and an updated pronunciation dictionary with more than 67K words. These resources can be useful for several other tasks in this low-resource language. We make our code and dataset publicly available at https://github.com/tafseer-nayeem/BengaliReadability for reproducibility.
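Document-level readability formulas of the family the abstract refers to typically combine average sentence length with average word complexity. As an illustrative sketch only (this is the classic English Flesch-Kincaid grade-level formula, not the paper's Bengali adaptation, and the syllable counter is a naive vowel-group heuristic):

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Adapting such a formula to another language, as the paper does for Bengali, involves more than translation: the constants, the notion of word complexity (e.g., consonant conjuncts instead of syllables), and the mapping from scores to grade levels all need to be re-derived for that language's education system.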
The critical importance of tech skills across industries sends a clear signal: In order to meet the expectations of future administrators and employers, elementary through high school students will need to develop expertise that was unavailable to their parents' generation. Today's leading innovation technology is machine learning, and to address the need for this vital skill set, the latest offering from the longtime partnership of Microsoft and code.org is a new course in artificial intelligence (AI) and its societal and ethical implications, designed for students in elementary and high school. AI's relevance cannot be overstated: it is the very basis for self-driving cars, but it also powers devices and services we've already become accustomed to, such as Amazon Alexa, interactive programming, telemedicine appointments, and online learning. Those are some very big responsibilities to be tackled by Gen Z. As code.org points out, despite the great benefits to society, the ethical impact can't be ignored: "How does algorithmic bias impact social justice or deep fakes impact democracy? How does society cope with rapid job automation? By learning how to consider the ethical issues that AI raises, these future computer scientists will be better able to envision the appropriate safeguards that help to maximize the benefits of AI technologies and reduce their risks."
The age of artificial intelligence (AI) has arrived, changing the world around us in exciting and unpredictable ways. We are getting accustomed to AI and our children will be highly dependent on it. AI helps bring about new careers, discover new drugs, augment our senses, and influence both our interaction with the world and our understanding of it. One day, it may help us eradicate war, disease, and poverty. Unfortunately, we cannot fully predict the effects of AI on society.
Facial emotion detection is a common problem in the field of cognitive science. Understanding what exactly we as humans see in each other that gives us insight into one another's emotions is a challenge we can approach from the artificial intelligence side. While I don't have enough experience in psychology or even artificial intelligence to determine these factors, we can always start by building a model to address at least the beginning of this question. FER2013 is a dataset of pictures of individuals labeled with the emotions of anger, disgust, fear, happiness, sadness, surprise, and neutral. When humans are tested on correctly identifying the facial expressions in a set of pictures from the dataset, their accuracy is about 65%.
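A deliberately simple starting model of the kind described above could classify flattened pixel vectors by their distance to per-emotion centroids. This is a minimal sketch, not a real FER2013 pipeline: the four-pixel feature vectors and their labels below are hypothetical stand-ins for actual 48x48 grayscale images.

```python
import math

# Hypothetical toy stand-in for FER2013: each sample is a flattened
# grayscale image (here just 4 pixels) paired with an emotion label.
TRAIN = [
    ([0.9, 0.8, 0.1, 0.2], "happiness"),
    ([0.8, 0.9, 0.2, 0.1], "happiness"),
    ([0.1, 0.2, 0.9, 0.8], "anger"),
    ([0.2, 0.1, 0.8, 0.9], "anger"),
]

def centroids(samples):
    # Average the pixel vectors belonging to each emotion label.
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in by_label.items()
    }

def predict(cents, vec):
    # Assign the label whose centroid is nearest in Euclidean distance.
    return min(cents, key=lambda label: math.dist(cents[label], vec))
```

A nearest-centroid baseline like this will fall far short of the roughly 65% human benchmark on real images, which is exactly why convolutional neural networks are the usual next step for this dataset.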
Teacher Sherisa Nailor helps Parker Drawbaugh, left, and Jacob Knouse in their Small-Animal Science class at Big Spring High School in Newville, Pa. It's a dilemma schools have struggled with for years: Should teachers spend the precious time they have helping students dig deeply into a specific issue, problem, or question? Or should they teach more broadly about a wide variety of topics? The argument for the former approach--called "deep learning"--is that it improves student engagement and prepares kids to be better problem solvers in a world with increasingly complex challenges around health, economics, social justice, and climate change. A broader approach, the counterargument goes, introduces students to a greater mix of topics, giving them a better sense of all the issues and problems society is facing. Taking that "deep learning" approach is now more difficult than ever, as students are stuck at home learning remotely either full time or part time, or in socially distanced classrooms where collaboration, project-based learning, and lab experiments are hard, if not impossible, to do.
Artificial intelligence is an increasingly prevalent part of our everyday lives. From live-updating, turn-by-turn driving directions to responsive voice-controlled digital assistants--all in the palms of our hands--we are constantly interacting with computer programming where machines learn from experience and adjust to new data to perform human-like tasks. For children growing up right now, AI will undoubtedly be a part of their future lives and jobs. So, it's critical that students understand computational thinking and know how machine learning works. "It's important that kids leave our classrooms with real-world knowledge and industry-standard software and technical experience under their belt," says Teresa Blizman-Schmitt, a sixth through eighth grade computer science and business education teacher in Vernon, CT.