How ancient tech is thwarting AI cheating in the classroom

PCWorld

Nearly two years ago, ChatGPT's AI writing powers set off a firestorm in classrooms. How would teachers be able to determine which assignments were actually authored by the student? A host of AI-powered services answered the call, and today there are even more services promising to catch AI cheaters. "My hand cramped up so much," my eldest son complained about the AP World History course he took last year, which required all papers and tests to be handwritten because of AI concerns.


Can AI be used ethically for school work? Here's what teachers say

PCWorld

Can AI be used ethically for school work? It depends on whom you ask -- quite literally. Less than two years after ChatGPT was originally released in November 2022, attitudes toward AI in the classroom still vary widely. High schools have viewed AI at best as a crutch and at worst as a tool for cheating. But several universities leave generative AI use entirely up to the discretion of the person teaching the course.


Omega-Regular Reward Machines

Hahn, Ernst Moritz, Perez, Mateo, Schewe, Sven, Somenzi, Fabio, Trivedi, Ashutosh, Wojtczak, Dominik

arXiv.org Artificial Intelligence

Reinforcement learning (RL) is a powerful approach for training agents to perform tasks, but designing an appropriate reward mechanism is critical to its success. In many cases, however, the complexity of the learning objectives goes beyond what the Markovian assumption can capture, necessitating a more sophisticated reward mechanism. Reward machines and omega-regular languages are two formalisms used to express non-Markovian rewards for quantitative and qualitative objectives, respectively. This paper introduces omega-regular reward machines, which integrate reward machines with omega-regular languages to enable an expressive and effective reward mechanism for RL. We present a model-free RL algorithm to compute epsilon-optimal strategies against omega-regular reward machines and evaluate the effectiveness of the proposed algorithm through experiments.
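To illustrate the basic reward-machine idea in isolation -- a minimal sketch, not the paper's omega-regular construction -- the following Python snippet shows a finite-state machine whose emitted reward depends on its internal state, making a task like "reach A, then reach B" Markovian again once the machine state is included:

```python
# Minimal reward machine sketch (illustrative only): a finite-state
# machine that emits a reward depending on its current state and the
# observed label, then moves to its next state.
class RewardMachine:
    def __init__(self, transitions, rewards, start):
        self.transitions = transitions  # (state, label) -> next state
        self.rewards = rewards          # (state, label) -> reward
        self.state = start

    def step(self, label):
        r = self.rewards.get((self.state, label), 0.0)
        self.state = self.transitions.get((self.state, label), self.state)
        return r

# Task: "reach A, then reach B" -- non-Markovian in the environment
# alone, but Markovian given the machine state.
rm = RewardMachine(
    transitions={("u0", "A"): "u1", ("u1", "B"): "u2"},
    rewards={("u1", "B"): 1.0},
    start="u0",
)
rewards_seen = [rm.step(label) for label in ["B", "A", "B"]]
# [0.0, 0.0, 1.0]: observing B before A earns nothing.
```

State names (`u0`, `u1`, `u2`) and labels are invented for the example; in an RL loop, `step` would be called with the label produced by each environment transition.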


AI might have prevented Boston Marathon bombing, but with risks: former police commissioner

FOX News

Rapidly developing artificial intelligence technology may have prevented the Boston Marathon bombing, but it might also become law enforcement's newest nightmare. That was the message from Ed Davis, who was Boston's police commissioner during the deadly terrorist attack on April 15, 2013. A decade after that plot that killed three people and injured hundreds, he told Fox News Digital that AI "will ultimately improve investigations and allow many dangerous criminals to be brought to justice." "Use of artificial intelligence systems applied to secret and top-secret databases could very well have prevented the Boston Marathon bombing," he said.


Artificial Intelligence And The Disruption Phase. It's Good.

#artificialintelligence

When Artificial Intelligence (AI) technologies began to make significant advances around two decades ago, societies began to talk more earnestly about both their possibilities and their dangers. Now, with the rise of Generative AI (GAI) and tools like ChatGPT and DALL-E 2, AI is starting to enter mainstream industry and society. AI is about to become a disruptive, revolutionary technology. Things are about to get very interesting. Why, and what does this mean?


Energy Grids Plug into AI for a Brighter, Cleaner Future

#artificialintelligence

Electric utilities are taking a course in machine learning to create smarter grids for tough challenges ahead. The winter 2021 megastorm in Texas left millions without power. Grid failures over the past two summers sparked devastating wildfires amid California's record drought. "Extreme weather events of 2021 highlighted the risks climate change is introducing, and the importance of investing in more resilient electricity grids," said a May 2021 report from the International Energy Agency, a group with members from more than 30 countries. It called for a net-zero carbon grid by 2050, fueled by hundreds more gigawatts in renewable sources.


Growing Robot Minds – MetaDevo AI Blog

#artificialintelligence

One way to increase the intelligence of a robot might be to train it with a series of missions, analogous to the missions or levels in a video game. In a developmental robot, the training would not be simply learning--its "brain" structure would actually change. Biological development shows some extremes a robot could emulate, like starting with a small seed that constructs itself, or creating too many neural connections and then deleting a large fraction of them in a later phase. To contrast development with learning: a simple artificial neural network is trained by changing its weights over a series of training inputs (with error correction if it is supervised). Development, by contrast, would be like growing completely new nodes, network layers, or entire networks during each training level. Or you can imagine the difference between decorating a skyscraper (learning) and building a skyscraper (development).
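The learning-versus-development contrast can be made concrete with a toy sketch (my own illustration, not the blog's code): "learning" nudges existing weights, while "development" changes the structure itself by growing a new layer of nodes.

```python
import random

class TinyNet:
    """Toy structure to contrast weight updates with structural growth."""

    def __init__(self, width):
        self.layers = [[random.random() for _ in range(width)]]

    def learn(self, layer, node, delta):
        """Learning: adjust an existing weight; structure is unchanged."""
        self.layers[layer][node] += delta

    def develop(self, width):
        """Development: grow an entirely new layer of nodes."""
        self.layers.append([random.random() for _ in range(width)])

net = TinyNet(width=3)
net.learn(0, 0, 0.1)   # same structure, slightly different weights
net.develop(width=5)   # new structure: the network now has two layers
```

A real developmental system would of course grow structure in response to training signals rather than on explicit command; the sketch only separates the two kinds of change.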


Perez

AAAI Conferences

We propose improved algorithms for the most common operations on Multi-Valued Decision Diagrams (MDDs): creation, reduction, complement, intersection, union, difference, symmetric difference, complement of union, and complement of intersection. We then show that, with these algorithms and thanks to the recent development of an efficient algorithm establishing arc consistency for MDD-based constraints (MDD4R), we can solve some problems simply by modeling them as a set of operations between MDDs. We apply our approach to the regular constraint and obtain results competitive with dedicated algorithms. We also experiment with our technique on a large-scale problem, phrase generation, and show that our approach gives results equivalent to those of a specific algorithm computing a complex automaton.
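To make the set-operation view of MDDs concrete, here is a hypothetical minimal sketch (not the paper's optimized algorithms): an MDD node is represented as a dict mapping edge values to children, with `True`/`False` as accepting/rejecting terminals, and intersection is a product construction that keeps an edge only when both diagrams allow that value. Real implementations memoize on node-pair identity and reduce the result; both steps are omitted for brevity.

```python
def intersect(x, y):
    """Intersection of two same-depth MDDs given as nested dicts.

    Terminals: True (tuple accepted) or False (rejected). A node is a
    dict {edge_value: child}; an edge survives only if both MDDs have
    it and the children's intersection is non-empty.
    """
    if x is False or y is False:
        return False
    if x is True or y is True:
        return True if (x is True and y is True) else False
    node = {}
    for value, child_x in x.items():
        if value in y:
            child = intersect(child_x, y[value])
            if child is not False:
                node[value] = child
    return node if node else False

# MDD for {(1, 2), (1, 3), (2, 2)}:
mdd1 = {1: {2: True, 3: True}, 2: {2: True}}
# MDD for {(1, 3), (2, 2), (2, 4)}:
mdd2 = {1: {3: True}, 2: {2: True, 4: True}}
result = intersect(mdd1, mdd2)  # represents {(1, 3), (2, 2)}
```

The variable names and dict encoding are my own; MDD4R and the paper's operations work on a shared, reduced node store rather than plain nested dicts.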


Perez

AAAI Conferences

Metadata are associated with most of the information we produce in our daily interactions and communication in the digital world. Yet, surprisingly, metadata are often still categorized as non-sensitive. Indeed, in the past, researchers and practitioners have mainly focused on the problem of identifying a user from the content of a message. In this paper, we use Twitter as a case study to quantify the uniqueness of the association between metadata and user identity and to understand the effectiveness of potential obfuscation strategies. More specifically, we analyze atomic fields in the metadata and systematically combine them in an effort to classify new tweets as belonging to an account, using different machine learning algorithms of increasing complexity. We demonstrate that, through the application of a supervised learning algorithm, we are able to identify any user in a group of 10,000 with approximately 96.7% accuracy. Moreover, if we broaden the scope of our search and consider the 10 most likely candidates, we increase the accuracy of the model to 99.22%. We also find that data obfuscation is hard and ineffective for this type of data: even after perturbing 60% of the training data, it is still possible to classify users with an accuracy higher than 95%. These results have strong implications for the design of metadata obfuscation strategies, for example for data set release, not only for Twitter but, more generally, for most social media platforms.
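The core attack can be illustrated with a toy sketch on invented data (not the paper's Twitter pipeline or its learning algorithms): treat a tweet's metadata fields as a feature vector and attribute a new tweet to the known account whose metadata fingerprint matches the most fields.

```python
# Invented per-account metadata fingerprints; field names ("client",
# "lang", "tz") are illustrative, not the paper's actual feature set.
known = {
    "alice": {"client": "web", "lang": "en", "tz": "UTC"},
    "bob":   {"client": "android", "lang": "en", "tz": "PST"},
}

def attribute(meta):
    """Score each account by the number of matching metadata fields."""
    scores = {user: sum(meta.get(k) == v for k, v in fp.items())
              for user, fp in known.items()}
    return max(scores, key=scores.get)

guess = attribute({"client": "android", "lang": "en", "tz": "PST"})
# guess == "bob": all three fields match bob's fingerprint.
```

The paper's point is that at Twitter scale this kind of signal, fed to a proper supervised learner over many atomic fields, identifies one user among 10,000 with ~96.7% accuracy; the nearest-fingerprint rule above is only a minimal stand-in.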


Why Intel believes confidential computing will boost AI and machine learning

#artificialintelligence

Companies are collecting increasing amounts of data, a trend that is driving the development of better analytical tools and tougher security. Analysis and security are now converging as confidential computing prepares to deliver a critical boost to artificial intelligence. Intel has been investing heavily in confidential computing as a way to expand the amount and types of data companies will manage through cloud services. According to Intel Fellow Ron Perez, who works on security architecture with the Intel Data Center Group, the company believes the emerging security standard will allow enterprises and large organizations to explore new ways to share the data needed to fuel AI and machine learning. "We see this as a long-term effort," Perez said.