
AI can influence voters' minds. What does that mean for democracy?

New Scientist

AI chatbots may have the power to influence voters' opinions. Does the persuasive power of AI chatbots spell the beginning of the end for democracy? In one of the largest surveys to date exploring how these tools can influence voter attitudes, AI chatbots were more persuasive than traditional political campaign tools, including advertisements and pamphlets, and as persuasive as seasoned political campaigners. But at least some researchers see reasons for optimism in the way the AI tools shifted opinions.


A Experimental Settings

Neural Information Processing Systems

All experiments were conducted on a single NVIDIA RTX 3090 GPU. The obtained text features were projected into the CLIP latent space via an FC layer. Test images followed the same preprocessing, except that center cropping was used. For Adience, classification accuracy is adopted as the evaluation metric.

Image Aesthetics Assessment. An ImageNet pre-trained VGG-16 was used as the image encoder.
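The projection step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the dimensions (4096-d encoder features mapped into a 512-d CLIP-style space) and the L2 normalization are assumptions, chosen because CLIP embeddings are typically compared by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_clip_space(feats, W, b):
    """Apply a fully connected (FC) layer, then L2-normalize, so the
    projected features can be compared to CLIP embeddings by cosine
    similarity. W, b are the FC layer's learned weight and bias."""
    z = feats @ W + b
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Hypothetical dimensions: 4096-d encoder features -> 512-d CLIP space.
W = rng.standard_normal((4096, 512)) * 0.01
b = np.zeros(512)
feats = rng.standard_normal((8, 4096))   # a batch of 8 feature vectors
z = project_to_clip_space(feats, W, b)   # shape (8, 512), unit-norm rows
```

In a real pipeline the FC layer would be trained jointly with the rest of the model rather than randomly initialized as here.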


Improving Continual Learning of Knowledge Graph Embeddings via Informed Initialization

Pons, Gerard, Bilalli, Besim, Queralt, Anna

arXiv.org Artificial Intelligence

Many Knowledge Graphs (KGs) are frequently updated, forcing their Knowledge Graph Embeddings (KGEs) to adapt to these changes. To address this problem, continual learning techniques for KGEs incorporate embeddings for new entities while updating the old ones. One necessary step in these methods is the initialization of the embeddings, as an input to the KGE learning process, which can have an important impact on the accuracy of the final embeddings, as well as on the time required to train them. This is especially relevant for relatively small and frequent updates. We propose a novel informed embedding initialization strategy, which can be seamlessly integrated into existing continual learning methods for KGEs, that enhances the acquisition of new knowledge while reducing catastrophic forgetting. Specifically, the KG schema and the previously learned embeddings are utilized to obtain initial representations for the new entities, based on the classes the entities belong to. Our extensive experimental analysis shows that the proposed initialization strategy improves the predictive performance of the resulting KGEs, while also enhancing knowledge retention. Furthermore, our approach accelerates knowledge acquisition, reducing the number of epochs, and therefore the time, required to incrementally learn new embeddings. Finally, its benefits across various types of KGE learning models are demonstrated.
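The class-informed initialization idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: it assumes each new entity is assigned one schema class, initializes it to the mean embedding of that class's previously embedded members, and falls back to a small random vector when the class has no known members. All function and variable names are hypothetical.

```python
import numpy as np

def informed_init(new_entity_classes, class_members, embeddings, dim, rng):
    """Initialize embeddings for new entities from the mean of previously
    learned embeddings of entities in the same schema class.

    new_entity_classes: {new entity -> class name}
    class_members:      {class name -> list of known entities}
    embeddings:         {known entity -> learned embedding vector}
    """
    init = {}
    for entity, cls in new_entity_classes.items():
        members = [e for e in class_members.get(cls, []) if e in embeddings]
        if members:
            # Class-informed start: centroid of same-class embeddings.
            init[entity] = np.mean([embeddings[e] for e in members], axis=0)
        else:
            # Fallback when the class is new or empty: small random init.
            init[entity] = rng.normal(scale=0.1, size=dim)
    return init

rng = np.random.default_rng(0)
dim = 4
embeddings = {"Paris": np.ones(dim), "Berlin": 3 * np.ones(dim)}
class_members = {"City": ["Paris", "Berlin"]}
new_entity_classes = {"Madrid": "City", "Quokka": "Animal"}
init = informed_init(new_entity_classes, class_members, embeddings, dim, rng)
# "Madrid" starts at the "City" centroid; "Quokka" gets the random fallback.
```

The intuition is that a new entity of a known class is likely to lie near its class centroid in embedding space, which should speed up convergence relative to a purely random start.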





DropEdge (%)

Neural Information Processing Systems

Reviewer #1: Thank you for the positive comments and suggestions! Below we address your questions in detail.

"It would be better if authors can try dropedge and sampling methods, instead of only adopting dropnode." Table 6 shows the classification results on benchmarks.

"It would be better if authors can provide the performance under different training ratio."


How to Select Which Active Learning Strategy is Best Suited for Your Specific Problem and Budget

Hacohen, Guy, Weinshall, Daphna

Neural Information Processing Systems

In the traditional supervised learning framework, active learning enables the learner to actively engage in the construction of the labeled training set by selecting a fixed-size subset of unlabeled examples for labeling by an oracle, where the number of labels requested is referred to as the budget.


A Appendix / A.1 Tabular Experiments / A.1.1 Implementation Details

Neural Information Processing Systems

As these are tabular domains, each state is defined by a single feature for both the actor and the critic. The full hyperparameters are listed below:

    Hyperparameter     Value
    Actor lr           1e-1
    Critic lr          1e-1
    Discount           0.99
    Max Steps          1000
    Temperature        1e-1
    GCN: hidden size   64
    GCN: α             0.6
    GCN: η             1e1

Table 2: Hyperparameters for the FourRooms and FourRoomsTraps domains.

A.1.2 In Fig we notice a close resemblance in the final output. This also results in very similar empirical performance. The agent is scanning the room in search of the red box (the goal).