input method
Rethinking Data Input for Point Cloud Upsampling
In recent years, point cloud upsampling has been widely applied in fields such as 3D reconstruction and surface generation. However, existing point cloud upsampling methods all take patch-based input, and no prior work has examined the differences and underlying principles between full-model input and patch-based input. To enable a comparison with patch-based input, this article proposes a new data input method that partitions the full point cloud model so as to preserve shape integrity while training PU-GCN. The approach was validated on the PU1K and ABC datasets, but the results showed that patch-based input outperforms full-model input, i.e., Average Segment input. The article therefore investigates the data-input factors and model modules that affect point cloud upsampling results.
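The patch-based pipeline that the abstract contrasts against typically seeds patches with farthest point sampling and grows each patch as a k-nearest-neighbour neighbourhood. A minimal sketch of that common recipe follows; the function names and parameter values are illustrative assumptions, not code from the paper:

```python
import numpy as np

def farthest_point_sample(pts, n_seeds):
    """Greedily pick n_seeds well-spread seed indices from an (N, 3) cloud."""
    idx = [0]
    d = ((pts - pts[0]) ** 2).sum(1)          # squared distance to nearest chosen seed
    for _ in range(n_seeds - 1):
        idx.append(int(d.argmax()))            # farthest point from current seed set
        d = np.minimum(d, ((pts - pts[idx[-1]]) ** 2).sum(1))
    return np.array(idx)

def extract_patches(pts, n_seeds=8, k=64):
    """Patch-based input: one k-NN neighbourhood around each sampled seed."""
    seeds = farthest_point_sample(pts, n_seeds)
    patches = []
    for s in seeds:
        d = ((pts - pts[s]) ** 2).sum(1)
        patches.append(pts[np.argsort(d)[:k]])  # k closest points, seed included
    return patches                              # list of (k, 3) arrays
```

Full-model input, by contrast, would feed the whole cloud (or contiguous segments of it) to the network, which is what the article's Average Segment scheme aims to do while keeping the overall shape intact.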
Generative Input: Towards Next-Generation Input Methods Paradigm
Ding, Keyu, Wang, Yongcan, Xu, Zihang, Jia, Zhenzhen, Wang, Shijin, Liu, Cong, Chen, Enhong
Since the release of ChatGPT, generative models have achieved tremendous success and become the de facto approach for various NLP tasks. However, their application to input methods remains under-explored. Many neural network approaches have been applied to the construction of Chinese input method engines (IMEs). Previous research often assumed that the input pinyin was correct and focused on the Pinyin-to-Character (P2C) task, which falls significantly short of meeting users' demands. Moreover, previous research could not leverage user feedback to optimize the model and provide personalized results. In this study, we propose a novel Generative Input paradigm named GeneInput. It uses prompts to handle all input scenarios and other intelligent auxiliary input functions, optimizing the model with user feedback to deliver personalized results. The results demonstrate that we achieve state-of-the-art performance for the first time on the Full-mode Key-sequence to Characters (FK2C) task. We propose a novel reward-model training method that eliminates the need for additional manual annotations, and its performance surpasses GPT-4 on tasks involving intelligent association and conversational assistance. Compared to traditional paradigms, GeneInput not only demonstrates superior performance but also exhibits enhanced robustness, scalability, and online learning capabilities.
Russian Language Data Analyst - Barcelona at TransPerfect - Barcelona, Spain
We are looking for Russian speakers to join us on a new, innovative, and interesting project to improve artificial intelligence and technology that makes our everyday lives better. Language Data Analysts will focus on speech or text recognition, input methods, keyboard/swipe technology, or other areas of human-machine interaction related to languages. No previous experience or training in the field is required; we will provide training. DataForce by TransPerfect is part of the TransPerfect family of companies, the world's largest provider of language and technology solutions for global business, with offices in more than 100 cities worldwide. We offer high-quality data for human-machine interaction to some of the most prestigious technology companies in the world.
- Information Technology > Artificial Intelligence > Machine Learning (0.69)
- Information Technology > Data Science > Data Mining > Big Data (0.65)
Registering Articulated Objects With Human-in-the-loop Corrections
Hagenow, Michael, Senft, Emmanuel, Laske, Evan, Hambuchen, Kimberly, Fong, Terrence, Radwin, Robert, Gleicher, Michael, Mutlu, Bilge, Zinn, Michael
Remotely programming robots to execute tasks often relies on registering objects of interest in the robot's environment. Frequently, these tasks involve articulating objects, such as opening or closing a valve. However, existing human-in-the-loop methods for registering objects do not consider articulations and their corresponding impact on the geometry of the object, which can cause the methods to fail. In this work, we present an approach where the registration system attempts to automatically determine the object model, pose, and articulation for user-selected points using nonlinear fitting and the iterative closest point algorithm. When the fitting is incorrect, the operator can iteratively intervene with corrections, after which the system will refit the object. We present an implementation of our fitting procedure for one degree-of-freedom (DOF) objects with revolute joints and evaluate it with a user study, which shows that it can improve user performance in measures of time on task, task load, ease of use, and usefulness compared to a manual registration approach. We also present a situated example that integrates our method into an end-to-end system for articulating a remote valve.
- North America > United States > Wisconsin > Dane County > Madison (0.05)
- North America > United States > Texas > Harris County > Houston (0.04)
- North America > United States > California > Santa Clara County > Mountain View (0.04)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)
- Government > Regional Government > North America Government > United States Government (0.94)
- Government > Space Agency (0.69)
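The abstract above leans on the iterative closest point (ICP) algorithm for registration. A minimal point-to-point ICP sketch, alternating nearest-neighbour matching with a Kabsch/SVD rigid fit, is shown below; this is the textbook algorithm under my own naming, not the authors' articulation-aware implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid fitting."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (adequate for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # total transform mapping the original src onto dst
    return best_rigid_transform(src, cur)
```

The paper's contribution sits on top of a loop like this: articulation parameters are fit jointly with pose, and the operator's corrections re-seed the fit when it converges to the wrong model.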
Characterizing Input Methods for Human-to-robot Demonstrations
Praveena, Pragathi, Subramani, Guru, Mutlu, Bilge, Gleicher, Michael
Human demonstrations are important in a range of robotics applications, and are created with a variety of input methods. However, the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that utilize human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing input methods as well as a novel input method that we introduce, the instrumented tongs. We detail the design specifications for our method and present a user study that compares it against three common input methods: free-hand manipulation, kinesthetic guidance, and teleoperation. Study results show that instrumented tongs provide high quality demonstrations and a positive experience for the demonstrator while offering good correspondence to the target robot.
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.66)
Neural Networks and the Future of Machine Learning - insideBIGDATA
Not long ago, many would scoff at the notion that a machine is "learning," "doing" or "knowing." But neural networks and artificial intelligence (AI) technologies are layering those skillsets together to perform increasingly complicated, human-like functions. Google DeepMind, for example, is one of the few very advanced neural networks driving the future of machine learning. While machines have previously been able to read and answer our questions about news articles, for example, their knowledge was often limited by the length of a piece or driven by brute-force computation. Newly developed algorithms enable those systems to learn from experience and online data, leading to a more sophisticated understanding of topics and language.
Why we are still a long way from a 'killer' chatbot
It all started with a conversational interface, and then bots just took over the debate. Even in this great piece, while Dan Grover mentioned the ubiquity of chat, he also highlighted the fact that we should recognize the limitations of pure text-based input methods. Looking at the history of bots, I struggle to see bots overtaking the streamlined GUI experience even now -- here's one recent subpar experience out of many. There's a good reason why we accepted the web and made it part of our everyday lives. It's very naive, and sometimes borders on funny, to think that bots will be a panacea for everything we do in life and will take over humanity one day.
Deep Learning is Changing System Design
Artificial intelligence (AI) is becoming increasingly ubiquitous within the technology industry, with capabilities that are much more practical than most consumers may think. Smarter e-mail spam filters and autonomous vehicles are just two examples of how deep-learning systems and AI technologies enable machines to better interact with their surrounding environment and provide vast benefits to users. By using a series of layers within a neural network to analyze data, deep-learning systems continue to change the way computers see, hear, identify, and even respond to objects in the real world. While the combination of such skills has made it possible for machines to perform increasingly complicated, human-like functions, the future of deep learning is now being dictated by user-driven input methods. Neural-network algorithms are among the most interesting machine-learning techniques.
How 'cognitive ergonomics' will humanise AI technology Information Age
Whether exchanging dialogue with our smartphones or scribbling characters on touchscreens, the Human-Machine Interfaces (HMI) we interact with today are intuitive and foster 'easy to use' input methods. Driven by speech, handwriting and touch, our technologies are continually progressing towards intuitive communication between humans and machines, and we are continuing to march forward. However, several advancements in artificial intelligence technology, such as machine and deep learning capabilities, have paved the way for the humanisation of our machines and devices. And there's one particular development in the AI space which has pioneered the ability for seamless human-to-machine interaction - cognitive ergonomics. Through cognitive ergonomics, system designs that allow machines to adapt and operate with mental workloads and other factors in mind, we are able to communicate with our devices as easily as writing a note on paper.
- Health & Medicine > Consumer Health (0.70)
- Information Technology (0.49)
Has AI become something we can't live without? Information Age
Artificial intelligence (AI) makes difficult tasks possible, such as sorting and recognising patterns in incredibly large data sets. The most challenging problems often have unexpected input and are often referred to as AI-complete or AI-hard, implying the need for human-like computation. While some might think of AI as technology mostly used for complex visual tasks - or even as a far-fetched concept only found in science fiction - it's used in more ways than most people realise. That raises the question: could modern society get by without this fast-growing technology? Depending on the source, some claim AI has been around since ancient times, when the Greeks had myths about robots, and engineers from Egypt and China built automatons.
- Asia > China (0.25)
- Africa > Middle East > Egypt (0.25)
- North America > United States (0.05)
- Banking & Finance (0.32)
- Leisure & Entertainment > Games > Chess (0.31)