- North America > Canada > Ontario > Toronto (0.14)
- North America > United States (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
1 Proofs
As described in the paper, Projected GAN training can be formulated as follows:

min_G max_{D_l} Σ_{l ∈ L} ( E_x [log D_l(P_l(x))] + E_z [log(1 − D_l(P_l(G(z))))] )

where {D_l} is the set of discriminators operating on the features produced by the projections P_l.
In this supplementary document, we first prove the theorem presented in the paper in Section 1. Section 2 provides additional evaluation metrics for StyleGAN-ADA [12], FastGAN [20], and Projected GAN, and FID of Projected GAN on nine more datasets. Section 4 reports additional experiments. Lastly, we provide details on training configurations, hyperparameters, and compute in Section 5. The supplementary videos show interpolations between random samples of Projected GAN on all datasets. Code, models, and supplementary videos can be found on the project page https://sites.
Efficient Reasoning via Chain of Unconscious Thought
Gong, Ruihan, Liu, Yue, Qu, Wenjie, Du, Mingzhe, He, Yufei, Ma, Yingwei, Chen, Yulin, Liu, Xiang, Wen, Yi, Li, Xinfeng, Wang, Ruidong, Zhu, Xinzhong, Hooi, Bryan, Zhang, Jiaheng
Large Reasoning Models (LRMs) achieve promising performance but compromise token efficiency due to verbose reasoning processes. Unconscious Thought Theory (UTT) posits that complex problems can be solved more efficiently through internalized cognitive processes. Inspired by UTT, we propose a new reasoning paradigm, termed Chain of Unconscious Thought (CoUT), to improve the token efficiency of LRMs by guiding them to mimic human unconscious thought and internalize reasoning processes. Concretely, we first prompt the model to internalize the reasoning by thinking in the hidden layer. Then, we design a bag of token-efficient strategies to further help models reduce unnecessary tokens yet preserve the performance. Our work reveals that models may possess beneficial unconscious thought, enabling improved efficiency without sacrificing performance. Extensive experiments demonstrate the effectiveness of CoUT. Remarkably, it surpasses CoT by reducing token usage by 47.62% while maintaining comparable accuracy, as shown in Figure 1. The code of CoUT is available at this link: https://github.com/Rohan-GRH/CoUT
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.69)
VeriContaminated: Assessing LLM-Driven Verilog Coding for Data Contamination
Wang, Zeng, Shao, Minghao, Bhandari, Jitendra, Mankali, Likhitha, Karri, Ramesh, Sinanoglu, Ozgur, Shafique, Muhammad, Knechtel, Johann
Large Language Models (LLMs) have revolutionized code generation, achieving exceptional results on various established benchmarking frameworks. However, concerns about data contamination - where benchmark data inadvertently leaks into pre-training or fine-tuning datasets - raise questions about the validity of these evaluations. While this issue is known and limits the industrial adoption of LLM-driven software engineering, hardware coding has received little to no attention regarding these risks. For the first time, we analyze state-of-the-art (SOTA) evaluation frameworks for Verilog code generation (VerilogEval and RTLLM), using established methods for contamination detection (CCD and Min-K% Prob). We cover SOTA commercial and open-source LLMs (CodeGen2.5, Minitron 4b, Mistral 7b, phi-4 mini, LLaMA-{1,2,3.1}, GPT-{2,3.5,4o}, Deepseek-Coder, and CodeQwen 1.5), in both baseline and fine-tuned models (RTLCoder and Verigen). Our study confirms that data contamination is a critical concern. We explore mitigations and the resulting trade-offs between code quality and fairness (i.e., reducing contamination toward unbiased benchmarking).
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States > New York (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Information Technology (0.46)
- Semiconductors & Electronics (0.46)
Want to Avoid AI Scams? Try These Tips From Our Experts
Thank you to all the readers of WIRED's AI Unlocked newsletter who tuned in for our most recent conversation about money and artificial intelligence scams. I had a blast interacting with readers and answering more questions live. If you missed the broadcast, a full recording is available here for you to watch anytime you'd like, and the previous two livestreams from the AI Unlocked series are available too. Subscribers can watch the first one here and the second one here. Katie Drummond, WIRED's global editorial director, kicked off our discussion this time, telling us how her father was recently approached by a scam caller who tried to trick him with a voice that sounded just like hers.
SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World
Zhang, Jiaqi, Gao, Chen, Zhang, Liyuan, Li, Yong, Yin, Hongzhi
Recent advances in embodied agents with multimodal perception and reasoning capabilities based on large vision-language models (LVLMs) enable them to excel at autonomously interacting with either real or cyber worlds, helping people make intelligent decisions in complex environments. However, current works are normally optimized by golden action trajectories or ideal task-oriented solutions toward a definitive goal. This paradigm considers limited user-oriented factors, which could be the reason for the performance reduction in a wide range of personal assistant applications. To address this, we propose Chain-of-User-Thought (COUT), a novel embodied reasoning paradigm that takes a chain of thought from basic action thinking to explicit and implicit personalized preference thought, to incorporate personalized factors into autonomous agent learning. To target COUT, we introduce SmartAgent, an agent framework that perceives cyber environments and reasons about personalized requirements by 1) interacting with a GUI to access an item pool, 2) generating users' explicit requirements implied by previous actions, and 3) recommending items to fulfill users' implicit requirements. To demonstrate SmartAgent's capabilities, we also create a brand-new dataset, SmartSpot, that offers a full-stage, personalized, action-involved environment. To the best of our knowledge, our work is the first to formulate the COUT process, serving as a preliminary attempt toward embodied personalized agent learning. Our extensive experiments on SmartSpot illuminate SmartAgent's functionality across a series of embodied and personalized sub-tasks. We will release code and data upon paper notification at https://github.com/tsinghua-fib-lab/SmartAgent.
- Asia > China > Beijing > Beijing (0.05)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Oceania > Australia > Queensland (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
PIM-DRAM: Accelerating Machine Learning Workloads using Processing in Memory based on DRAM Technology
Roy, Sourjya, Ali, Mustafa, Raghunathan, Anand
Deep Neural Networks (DNNs) have gained significant interest in the recent past for a plethora of applications such as image and video analytics, language translation, and medical diagnosis. High memory bandwidth is required to keep up with the needs of data-intensive DNN applications when implemented on a von Neumann hardware architecture, as the majority of the data resides in main memory. Therefore, processing in memory can provide a promising solution to the memory-wall bottleneck for ML workloads. In this work, we propose a DRAM-based processing-in-memory (PIM) multiplication primitive coupled with intra-bank accumulation to accelerate matrix-vector operations in ML workloads. Moreover, we propose a processing-in-memory DRAM bank architecture, data mapping, and dataflow based on the proposed primitive. System evaluations performed on networks like AlexNet, VGG16, and ResNet18 show that the proposed architecture, mapping, and dataflow can provide up to 23x and 6.5x benefits over a GPU and an ideal conventional (non-PIM) baseline architecture with infinite compute bandwidth, respectively.
Class and Object, Function Overloading Explained in C++
Using functions with the same name in different places in a program - that is, having two or more functions with the same name but different parameter lists - is called function overloading; the compiler selects which one to call based on the arguments. Public data members can be accessed directly through an object; however, private data members are not allowed to be accessed directly by an object and must be reached through public member functions.