ACT-1: How Adept Is Building the Future of AI with Action Transformers

#artificialintelligence

One of AI's most ambitious goals is to build systems that can do everything a human can. GPT-3 can write and Stable Diffusion can paint, but neither can interact with the world directly. AI companies have spent the past decade trying to build intelligent agents that can, and that now seems to be changing. One of my latest articles covers Google's PaLM-SayCan (PSC), a robot powered by PaLM, the best large language model to date. PSC's language module interprets human requests expressed in natural language and transforms them into high-level tasks, which are then broken down into elemental actions.
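The request-to-task-to-action pipeline described above can be sketched as a simple lookup: a request maps to a high-level task, which expands into elemental actions. This is a hypothetical illustration of the idea only; the task names and function are invented here and are not Adept's or Google's actual APIs (which use a large language model, not a table).

```python
# Illustrative task library: each high-level request expands into a
# sequence of elemental actions a robot could execute one by one.
TASK_LIBRARY = {
    "bring me a drink": ["find drink", "pick up drink",
                         "navigate to user", "hand over drink"],
    "clean the table": ["navigate to table", "find objects",
                        "pick up object", "place in bin"],
}

def decompose(request: str) -> list[str]:
    """Map a natural-language request to a list of elemental actions."""
    return TASK_LIBRARY.get(request.lower().strip(), [])

actions = decompose("Bring me a drink")
```

In a real system like PSC, the language model scores candidate actions by usefulness and feasibility instead of consulting a fixed table; the sketch only shows the shape of the decomposition.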


What is Artificial Intelligence ? Dr Pratik Mungekar

#artificialintelligence

We live in a rapidly changing world, and a study of nature suggests that adaptation is the key to survival. Whatever you pursue, stay abreast of current events and how they may affect your plans. From Siri and Google Assistant to self-driving cars and ride-sharing apps like Uber, it is Artificial Intelligence that makes businesses smarter. Have you ever wondered how cab-booking apps estimate the price of your ride before you even take it?
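To make the fare-estimation question concrete, here is a toy sketch of how a ride-hailing app might quote a price before the trip: a base fare plus per-kilometer and per-minute rates, scaled by a surge multiplier. The coefficients are invented for illustration; a real app would learn rates and demand multipliers from historical trip data with machine learning.

```python
def estimate_fare(distance_km: float, duration_min: float,
                  surge: float = 1.0) -> float:
    """Quote a fare from predicted distance and duration (toy rates)."""
    BASE_FARE = 2.50   # flag-drop charge
    PER_KM = 1.20      # distance rate
    PER_MIN = 0.25     # time rate
    fare = (BASE_FARE + PER_KM * distance_km + PER_MIN * duration_min) * surge
    return round(fare, 2)

fare = estimate_fare(10.0, 20.0)  # 2.50 + 12.00 + 5.00 = 19.50
```

The interesting ML problem hides inside the inputs: the app must predict the distance, duration, and surge factor before the ride happens.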


Top Artificial Intelligence (AI) Books to Read in 2022-2023

#artificialintelligence

Artificial intelligence is the ability of a machine to reason, learn, and solve problems in the same ways that people do. Its appeal is that a machine equipped with learning algorithms can act with a degree of intelligence of its own, rather than having to be explicitly pre-programmed for every task. Artificial intelligence is one of the most frequently used buzzwords in technology today. Thanks to innovations like Siri and Alexa, we are learning more and more about how computers can mimic human thought processes and even complete jobs once thought too complex for machines. The concept of artificial intelligence has long occupied the minds of philosophers, technologists, and science fiction writers.


Hiwonder JetHexa ROS Hexapod Robot Kit Powered by Jetson Nano with Lidar Depth Camera Support SLAM Mapping and Navigation

#artificialintelligence

Product Description

JetHexa is an open-source hexapod robot based on the Robot Operating System (ROS). It is equipped with high-performance hardware, including an NVIDIA Jetson Nano, intelligent serial bus servos, Lidar, and an HD monocular camera or 3D depth camera, enabling motion control, mapping and navigation, tracking and obstacle avoidance, custom patrolling, human feature recognition, somatosensory interaction, and other functions. With a novel inverse kinematics algorithm, support for tripod and ripple gaits, and highly configurable body posture, height, and speed, JetHexa serves both as an advanced platform for learning and verifying hexapod movement and as a solution for ROS development. Ample ROS and robot learning materials and tutorials are provided to help users get started.

Jetson Nano Control System
The NVIDIA Jetson Nano runs mainstream deep learning frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet, providing the computing power for demanding AI projects. Powered by the Jetson Nano, JetHexa can perform image recognition, object detection and positioning, pose estimation, semantic segmentation, intelligent analysis, and more.

Cameras
Monocular Camera (with 2-DOF pan-tilt): The monocular camera can rotate up, down, left, and right, and supports color tracking, autonomous driving, and similar tasks.
3D Depth Camera: The depth camera processes depth-map data and enables 3D visual mapping and navigation.

ROS Highlights
2D Lidar Mapping, Navigation and Obstacle Avoidance: JetHexa carries a high-performance EAI G4 Lidar that supports mapping with diverse algorithms (Cartographer, Hector, Karto, and Gmapping), path planning, fixed-point navigation, and obstacle avoidance during navigation.
RTAB-VSLAM 3D Vision Mapping and Navigation: Supporting 3D color mapping in two ways, pure RTAB vision or a fusion of vision and Lidar, JetHexa can navigate and avoid obstacles in a 3D map and perform global relocation.
Multi-point Navigation and Obstacle Avoidance: The Lidar scans the surroundings in real time so JetHexa can avoid obstacles during multi-point navigation.
Depth Image Data, Point Cloud: Through the corresponding API, JetHexa can obtain the camera's depth image, color image, and point cloud.
KCF Target Tracking: Based on the KCF filtering algorithm, the robot can track a selected target.
Depth Camera Obstacle Recognition: With the depth camera, the robot can detect the obstacle ahead and pass through it.
Custom Path Patrolling: Users can define a path and order the robot to patrol along it.
Lidar Tracking: By scanning a moving object in front of it, the Lidar enables target tracking.
Lidar Guarding: The Lidar guards the surroundings and sounds an alarm when it detects an intruder.
Color Recognition and Tracking: The robot can recognize and track colors, and can be set to execute different actions according to the color.
Group Control: A group of JetHexa robots can be controlled by a single wireless handle to perform actions in unison.
Intelligent Formation: A batch of robots can be controlled to patrol in different formations.
Canyon Crossing: When the Lidar detects a narrow passage ahead, the robot adjusts its posture and direction to pass through it.
Auto Line Following: The robot can recognize a line in a user-designated color and patrol along it.
Tag Recognition and Tracking: JetHexa can recognize and position several AR tags at the same time.
Posture Detection: The built-in IMU sensor detects the body posture in real time.

Upgraded Inverse Kinematics Algorithm
One-click Gait Switching: JetHexa can switch between tripod and ripple gaits at will.
"Moonwalk" at Fixed Speed and Height: Through the inverse kinematics algorithm, JetHexa stays stable during SLAM mapping and can "moonwalk" at a constant speed.
Pitch and Roll Angle Adjustment: Highly configurable body posture, center of gravity, pitch angle, and roll angle let the hexapod overcome all kinds of complicated terrain.
Direction, Speed, Height and Stride Adjustment: JetHexa can turn and change lanes while moving, with stepless adjustment of linear velocity, angular velocity, stance, height, and stride.
Body Self-balancing: The built-in IMU sensor detects the body posture in real time so the robot can adjust its joints to keep the body balanced.

Deep Learning and Model Training for AI Creativity
Using GoogLeNet, YOLO, MTCNN, and other neural networks, JetHexa supports deep learning for model training. By loading various models, it can quickly recognize targets and implement complex AI projects such as waste sorting, mask identification, and emotion recognition.
Waste Sorting: Quickly recognizes different waste cards and places them in the corresponding area by category.
Mask Identification: With its strong computing power, JetHexa's AI functions can be expanded through deep learning.
Emotion Recognition: JetHexa can recognize facial features accurately enough to catch nuances of expression.

MediaPipe Development, Upgraded AI Interaction
Based on the MediaPipe framework, JetHexa can carry out human body tracking, hand detection, posture detection, holistic detection, face detection, 3D detection, and more, including fingertip trajectory control, human posture control, gesture recognition, and 3D face detection.

ROS Robot Operating System
ROS is an open-source meta operating system for robots and a globally popular robotic communication framework. It provides basic services such as hardware abstraction, low-level device control, implementations of commonly used functionality, message passing between processes, and package management. It also offers the tools and library functions needed to obtain, compile, write, and run code across computers, with the aim of supporting code reuse in robotics research and development.

Gazebo Simulation
JetHexa employs the ROS framework and supports Gazebo simulation. Gazebo brings a fresh approach to controlling JetHexa and verifying algorithms in a simulated environment, which reduces experimental requirements and improves efficiency.
Body Control Simulation: Verify the kinematics algorithm in simulation to avoid damaging the robot through algorithm errors.
Visual Data: Visual data is provided for observing the robot's feet and the trajectory of the center of gravity, so the algorithm can be optimized.

Various Control Methods
WonderAi app, Map Nav app (Android only), PC software, and wireless handle.

Product Structure
EAI G4 Lidar.
Intelligent Serial Bus Servo: 35 kg torque, high accuracy, data feedback, easy wiring, 12 V supply voltage, and strong power.
OLED Display: Shows controller properties and battery voltage in real time, and supports custom settings.
Anodized Metal Bracket: The robot's metal bracket is finely anodized for a delicate appearance and long service life.
Monocular Camera / Depth Camera: Either the 8-megapixel wide-angle Sony monocular camera or the Orbbec 3D binocular structured-light depth camera supports multi-scenario high-accuracy AI recognition; the depth camera additionally enables 3D mapping and navigation.
Multi-functional Expansion Board: The onboard IMU sensor detects the body posture in real time. The board provides a 2-channel PWM servo port, 2 keys, 1 LED, 2 GPIO expansion interfaces, and 2 IIC interfaces.

Kits: JetHexa Standard Kit / JetHexa Advanced Kit

Specifications

JetHexa Parameters
Product weight: 2.5 kg
Material: full-metal hard aluminum alloy bracket (anodized)
Monocular camera pan-tilt: 2 DOF
Battery: 11.1 V 3500 mAh 5C LiPo battery
Battery life: 60 min
Robot DOF: 18
Hardware: ROS controller and ROS expansion board
Operating system: Ubuntu 18.04 LTS + ROS Melodic
Software: PC software + iOS/Android app
Communication: USB / Wi-Fi / Ethernet
Programming languages: Python / C / C++ / JavaScript
Storage: 32 GB TF card
Servo: HX-35H intelligent serial bus servo
Control method: computer / phone / handle
Package size: 387 x 356 x 210 mm (length x width x height)
Weight (with package): 3.6 kg

Battery Parameters
Model: 11.1 V 3500 mAh 5C LiPo battery
Capacity: 3500 mAh
Rated discharge current: 5C
Plug: SM plug + DC female
Voltage: 11.1 V
Size: 72 x 55 x 19 mm
Weight: 159 g
Charger: 12.6 V
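To give a feel for the inverse kinematics a hexapod leg controller performs, here is a minimal sketch of planar two-link IK: given a foot target (x, y) and the two link lengths, recover the joint angles. JetHexa's actual algorithm handles 3-DOF legs plus body posture; this simplified two-link case is for illustration only and is not the vendor's code.

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Return joint angles (radians) placing a 2-link leg's foot at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the knee angle from the squared reach.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # knee angle (elbow-down solution)
    # Hip angle: direction to target minus the offset introduced by the knee.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Solving this for all six legs at every control tick, subject to a desired body height and tilt, is what lets the robot hold its body steady while the feet move.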


What Is Artificial Intelligence (AI)

#artificialintelligence

Natural language processing (NLP) enables an intuitive form of communication between humans and intelligent systems using human languages. NLP drives modern interactive voice response (IVR) systems by processing language to improve communication. Chatbots are the most common application of NLP in business. Advanced virtual assistants, sometimes called conversational AI agents, are powered by conversational user interfaces, NLP, and semantic and deep learning techniques. Progressing beyond chatbots, advanced virtual assistants listen to and observe behaviors, build and maintain data models, and predict and recommend actions to assist people with and automate tasks that were previously only possible for humans to accomplish.
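At the core of most chatbots is an intent-matching step: decide which known intent the user's utterance expresses, then respond accordingly. Here is a minimal keyword-overlap sketch of that step; production systems use trained NLP models, and the intents and keywords below are invented for illustration.

```python
# Toy intent inventory: each intent is a set of trigger keywords.
INTENTS = {
    "check_balance": {"balance", "account", "money"},
    "reset_password": {"password", "reset", "forgot"},
    "opening_hours": {"hours", "open", "close", "when"},
}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the utterance."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

The "advanced virtual assistant" described above differs from this sketch mainly in scale: instead of keyword sets, it uses semantic and deep learning models, and it maintains data models of the user over time to predict and recommend actions.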


The Role of Symbolic AI and Machine Learning in Robotics

#artificialintelligence

Robotics is a multi-disciplinary field of computer science dedicated to the design and manufacture of robots, with applications in industries such as manufacturing, space exploration and defence. While the field has existed for over 50 years, recent advances such as the Spot and Atlas robots from Boston Dynamics are truly capturing the public's imagination as science fiction becomes reality. Traditionally, robotics has relied on machine learning/deep learning techniques such as object recognition. While this has led to huge advancements, the next frontier in robotics is to enable robots to operate in the real world autonomously, with as little human interaction as possible. Such autonomous robots differ from non-autonomous ones in that they operate in an open world, with undefined rules, uncertain real-world observations, and an environment -- the real world -- that is constantly changing.
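One common way to combine the two approaches the article contrasts is to let a learned perception model produce symbols (object labels) and then apply hand-written symbolic rules to choose an action. The sketch below mocks the detector's output and uses invented labels and rules purely to illustrate the hybrid pattern, not any particular robot's policy.

```python
# Symbolic layer: map perceived labels to actions, checked in priority order
# so the most safety-critical rule always wins.
RULES = {
    "person": "stop",
    "obstacle": "replan_path",
    "target": "approach",
}

def decide(detections: list[str]) -> str:
    """Apply symbolic rules to (mocked) learned-perception output."""
    for label in ("person", "obstacle", "target"):
        if label in detections:
            return RULES[label]
    return "continue"
```

The appeal of the hybrid design is explainability: the neural network handles the messy open world, while the decision layer remains a small, auditable rule set.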


What is Artificial Intelligence? How does AI work, Types, Trends and Future of it?

#artificialintelligence

Let's take a detailed look. Narrow AI is the most common form of AI you'll find in the market today. These Artificial Intelligence systems are designed to solve a single problem and execute a single task really well. By definition, they have narrow capabilities, like recommending a product to an e-commerce user or predicting the weather. This is the only kind of Artificial Intelligence that exists today. These systems can come close to human functioning in very specific contexts, and even surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters. AGI, by contrast, is still a theoretical concept: AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning, reasoning, and so on.


Artificial Intelligence Tutorial for Beginners

#artificialintelligence

This Artificial Intelligence tutorial provides basic and intermediate information on the concepts of Artificial Intelligence. It is designed to help students and working professionals who are complete beginners. Our focus here is on artificial intelligence; if you wish to learn more about machine learning, you can check out our Machine Learning tutorial for complete beginners. Through the course of this tutorial, we will look at various concepts such as the meaning of artificial intelligence, the levels of AI, why AI is important, its various applications, the future of artificial intelligence, and more. Usually, working in the field of AI requires a lot of experience, so we will also discuss the various job profiles associated with artificial intelligence that can help you attain relevant experience. You don't need to come from a specific background to join the field of AI, as it is possible to learn and attain the skills needed. While the terms Data Science, Artificial Intelligence (AI), and Machine Learning fall in the same domain and are connected, each has its specific applications and meaning. Simply put, artificial intelligence aims to enable machines to execute reasoning by replicating human intelligence. Since the main objective of AI processes is to teach machines from experience, feeding them the right information and enabling self-correction is crucial. What exactly is AI? The answer depends on who you ask. A layman with a fleeting understanding of technology would link it to robots; an AI researcher would say it's a set of algorithms that can produce results without having to be explicitly instructed to do so. Both of these answers are right.


Best usages of Artificial Intelligence in everyday life (2022) - Dataconomy

#artificialintelligence

There are so many great applications of Artificial Intelligence in daily life that use machine learning and other techniques in the background. AI is everywhere in our lives, from reading our emails to getting driving directions to receiving music or movie suggestions. Don't be put off by AI jargon; we've created a detailed AI glossary covering the most commonly used Artificial Intelligence terms and the basics of the field. Now, if you're ready, let's look at how we use AI in 2022. Artificial intelligence (AI) appears in popular culture most often as a group of intelligent robots bent on destroying humanity, or at the very least as a stunning theme park. We're safe for now, because machines with general artificial intelligence don't yet exist and aren't expected anytime soon. You can learn about the risks and benefits of Artificial Intelligence in this article.


Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle

Journal of Artificial Intelligence Research

This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time, and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model and a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of the social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameter values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground truth trajectories. The proposed model reproduces observed behaviors that have not been replicated by the social force model and outperforms the social force model at predicting pedestrian behavior around the vehicle on the used dataset. The model generates explainable and real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to an autonomous vehicle embedded prediction.
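For readers unfamiliar with the social force model the paper builds on, here is a minimal sketch of its core update: each pedestrian accelerates toward a desired velocity with relaxation time tau and is repelled from nearby agents by an exponentially decaying force. This is a simplified form (it omits agent radii, anisotropy, and the paper's decision model), and the parameter values are illustrative, not the calibrated values from the paper.

```python
import math

def social_force(pos, vel, desired_vel, others, tau=0.5, A=2.0, B=0.3):
    """Return the 2-D acceleration on a pedestrian at `pos` moving at `vel`."""
    # Driving term: relax toward the desired velocity with time constant tau.
    ax = (desired_vel[0] - vel[0]) / tau
    ay = (desired_vel[1] - vel[1]) / tau
    # Repulsive terms from other agents (e.g. the vehicle, other pedestrians).
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if d > 1e-9:
            f = A * math.exp(-d / B)  # magnitude decays with distance
            ax += f * dx / d          # push directly away from the agent
            ay += f * dy / d
    return ax, ay
```

Integrating this acceleration over small time steps yields a trajectory; the paper's contribution is to couple such a force model with a decision model that anticipates the vehicle, which the plain force terms above cannot do.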