How to Train a Robot (Using Artificial Intelligence and Supercomputers)

#artificialintelligence

Examples of 3D point clouds synthesized by the progressive conditional generative adversarial network (PCGAN) for an assortment of object classes. PCGAN generates both geometry and color for point clouds, without supervision, through a coarse-to-fine training process.

UT Arlington computer scientists use TACC systems to generate synthetic objects for robot training. Before he joined the University of Texas at Arlington as an Assistant Professor in the Department of Computer Science and Engineering and founded the Robotic Vision Laboratory there, William Beksi interned at iRobot, the world's largest producer of consumer robots (mainly through its Roomba robotic vacuum). To navigate built environments, robots must be able to sense and make decisions about how to interact with their locale.
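The coarse-to-fine idea mentioned above can be illustrated generically: start from a sparse point set and progressively densify it in stages. The sketch below is purely illustrative and is not Beksi's actual PCGAN architecture; the `refine` function, stage count, and jitter parameters are all assumptions standing in for what learned generator stages would produce.

```python
import numpy as np

def refine(points, rng, noise=0.05):
    """One coarse-to-fine step: double the point count by duplicating
    each point with a small random offset. In a learned generator,
    a network stage would produce this refinement instead."""
    jitter = rng.normal(0.0, noise, size=points.shape)
    return np.vstack([points, points + jitter])

rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 1.0, size=(64, 3))  # coarse stage: 64 points
for _ in range(3):                          # three refinement stages
    cloud = refine(cloud, rng)
# 64 -> 128 -> 256 -> 512 points
```

Each stage adds detail at a finer scale, which is the intuition behind progressive, coarse-to-fine generative training.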


How to train a robot (using AI and supercomputers)

#artificialintelligence

Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed.


How to train a robot (using AI and supercomputers)

ScienceDaily > Artificial Intelligence

Beksi's research focuses on robotics, computer vision, and cyber-physical systems.


Phys.org - News and Articles on Science and Technology IAM Network

#artificialintelligence

Using a new field of applied mathematics, a computer scientist at The University of Texas at Arlington is working to enhance the perception capabilities of robots. William Beksi, assistant professor of computer science and engineering, is investigating how to effectively process 3-D point cloud data captured from low-cost sensors, information that robots could use to facilitate intelligent tasks in complex scenarios. Beksi's work is funded with a two-year, $175,000 grant from the National Science Foundation. Three-dimensional point clouds are sets of points in space, sometimes with color information, that can be obtained from inexpensive 3-D sensors. However, data generated by these sensors can suffer from anomalies, such as the presence of noise and variation in the density of the points. These issues limit the reliability, efficiency, and scalability of robotic perception applications that use 3-D point clouds for manipulation, navigation, and object detection and classification.
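One standard way to handle the noise anomalies described above is statistical outlier removal: drop points whose mean distance to their nearest neighbors is far above the cloud-wide average. The minimal sketch below uses a brute-force NumPy implementation with assumed parameters (`k` neighbors, `n_std` threshold); production systems would typically use a KD-tree and a library such as Open3D.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, n_std=2.0):
    """Filter a 3-D point cloud (N x 3 array) by dropping points whose
    mean distance to their k nearest neighbors is unusually large."""
    # Pairwise Euclidean distances (fine for small clouds; use a
    # KD-tree for large ones).
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Mean distance from each point to its k nearest neighbors,
    # excluding the zero distance to itself.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    # Keep points within n_std standard deviations of the global mean.
    threshold = mean_knn.mean() + n_std * mean_knn.std()
    return points[mean_knn <= threshold]

# A dense cluster plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(100, 3)), [[5.0, 5.0, 5.0]]])
clean = remove_statistical_outliers(cloud, k=8, n_std=2.0)
```

Density variation, the other anomaly mentioned, is harder: a fixed distance threshold can wrongly discard valid points in sparse regions, which is one reason robust point cloud processing remains an active research problem.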