
Collaborating Authors: hackster


Porting Deep Learning Models to Embedded Systems: A Solved Challenge - Hackster.io

#artificialintelligence

The past few years have seen an explosion in the use of artificial intelligence on embedded and edge devices. Starting with the keyword spotting models that wake up the digital assistants built into every modern cellphone, "edge AI" products have made major inroads into our homes, wearable devices, and industrial settings. They represent the application of machine learning to a new computational context. ML practitioners excel at building datasets, experimenting with different model architectures, and producing best-in-class models. ML experts also understand the potential of machine learning to transform the way that humans and technology work together.


Picture Perfect - Hackster.io

#artificialintelligence

As machine learning algorithms continue to advance, the need for good, accurately annotated datasets is becoming increasingly apparent. With less and less room for optimization of the models themselves, more attention is finally being turned to addressing issues with data quality. After all, no matter how much potential a particular model has, that potential cannot be realized without a good dataset to learn from. Image classification is a common task for machine learning models, and these models suffer from a particular type of data problem called co-occurrence bias. Co-occurrence bias can cause irrelevant details to get the attention of a machine learning model, leading to incorrect predictions. For example, if a dataset used to train an object recognition model only contains images of boats in the ocean, the model may start classifying anything related to the ocean, such as beaches or waves, as boats.
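One practical way to surface the co-occurrence bias described above is to audit how often label pairs appear together in the annotations. A minimal sketch (the dataset and the `cooccurrence_counts` helper are illustrative, not from the article):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(annotations):
    """Count how often each pair of labels is annotated in the same image.

    `annotations` maps an image id to the set of labels present in it.
    """
    pairs = Counter()
    for labels in annotations.values():
        for a, b in combinations(sorted(set(labels)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy dataset: every "boat" image also contains "ocean".
dataset = {
    "img1": {"boat", "ocean"},
    "img2": {"boat", "ocean"},
    "img3": {"ocean", "beach"},
    "img4": {"car", "road"},
}

counts = cooccurrence_counts(dataset)
# ("boat", "ocean") co-occurring in every boat image is a red flag:
# a model may learn ocean features as a proxy for "boat".
```

A pair that co-occurs in nearly all images of a class suggests the dataset needs more varied contexts (boats in harbors, on trailers, and so on) before the model can learn the object rather than its background.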


Facial Recognition and Tracking Project with mechArm - Hackster.io

#artificialintelligence

As mentioned earlier, the computing power of the Raspberry Pi is insufficient; other control boards, such as the Jetson Nano (600 MHz) or a high-performance image-processing computer, would run the project more smoothly. Also, because we did not perform hand-eye calibration in the movement control module, only relative displacement can be used. Control is divided into a "sampling stage" and a "movement stage". Currently the lens should be kept stationary during sampling, but this is difficult to guarantee, so the sampled coordinates deviate when the lens moves. Finally, I would like to specially thank Elephant Robotics for their help during the development of the project, which made it possible to complete. The mechArm used in this project is a centrally symmetrical robotic arm with limits on its joint movement; if the program were applied to the more flexible myCobot, the situation might be different. If you have any questions about the project, please leave me a message below.


AI Conversation Speaker aka Friend Bot: Part 2 Wake Word - Hackster.io

#artificialintelligence

The Conversational Speaker, informally known as "Friend Bot", uses a Raspberry Pi to enable a spoken conversation with OpenAI large language models. This implementation waits for a wake phrase, listens to speech, processes the conversation through the OpenAI service, and responds back. For more information on the prompt engine used for maintaining conversation context, go here: python, typescript, dotnet. This project is written in .NET 6, which supports Raspberry Pi OS, Linux, macOS, and Windows. The code base has a default wake word (i.e., "Hey, Computer.")
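The wake-phrase gate described above amounts to checking a normalized speech transcript for the trigger words before passing anything to the language model. A hedged sketch of that check (the project itself is .NET; this Python helper and its normalization rules are an illustration, not the project's code):

```python
import re

WAKE_PHRASE = "hey computer"  # the project's default wake word

def heard_wake_phrase(transcript: str) -> bool:
    """Return True when the wake phrase appears in a speech transcript.

    Case and punctuation are stripped first, so "Hey, Computer." matches.
    """
    normalized = re.sub(r"[^a-z ]", "", transcript.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return WAKE_PHRASE in normalized
```

Only transcripts that pass this gate would be forwarded to the OpenAI service, which keeps the device from streaming every overheard sentence to the cloud.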


Raspberry Pi Pico machine learning inference tutorial

#artificialintelligence

If you are interested in learning more about machine learning inference on the recently launched Raspberry Pi Pico microcontroller, you may be interested in a new project published to Hackster.io. Classed as an intermediate skill level project taking approximately 60 minutes, Maslov covers the basics of setting up a Seeed Grove Shield for Pi Pico v1.0 and Edge Impulse. Edge Impulse is a platform that enables developers to easily train and deploy deep learning models on embedded devices. Check out the video below to learn more. "This is another article in the know-how series, which focuses solely on a specific feature or technique, and today I'll tell you how to use a neural network trained with Edge Impulse with the new Raspberry Pi Pico (RP2040). Also make sure to watch the tutorial video with step-by-step instructions."


Raspberry Pi machine learning with TensorFlow Lite - Geeky Gadgets

#artificialintelligence

If you are interested in learning more about how you can use your Raspberry Pi and machine learning to expand your projects, you may be interested in a new tutorial published to Hackster.io. The tutorial takes approximately four hours to complete and has been classed as an advanced skill level build using a Raspberry Pi 4 Model B mini PC. Check out the video below for an introduction to the proof of concept tutorial. "TensorFlow Lite allows you to take the same ML models used with TensorFlow (with some tweaks) and deploy them in mobile and IoT edge computing scenarios. There are obvious downsides with minimal compute power and less accurate results. However, what you can accomplish with a tiny processor sipping tiny amounts of power is still quite staggering."
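The accuracy loss the quote alludes to comes largely from quantization, which TensorFlow Lite uses to shrink models for small devices. A minimal sketch of the affine int8 quantization arithmetic behind it (an illustration of the general scheme, not TensorFlow Lite's actual implementation):

```python
def quantize_params(rmin, rmax, qmin=-128, qmax=127):
    """Compute scale and zero-point for affine int8 quantization."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, int(zero_point)

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to an int8 code, clamping to the representable range."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from an int8 code."""
    return scale * (q - zero_point)

# Example: activations observed in [-1, 1] during calibration.
scale, zp = quantize_params(-1.0, 1.0)
x = dequantize(quantize(0.5, scale, zp), scale, zp)  # ~0.5, small rounding error
```

Each weight or activation becomes one byte instead of four, at the cost of a rounding error bounded by the scale, which is exactly the compute-versus-accuracy trade-off the tutorial describes.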


Can artificial intelligence give elephants a winning edge?

#artificialintelligence

Adam Benzion is a serial entrepreneur, writer, tech investor, co-founder of Hackster.io, and the CXO of Edge Impulse. Images of elephants roaming the African plains are imprinted on all of our minds and easily recognized as a symbol of Africa. But the future of elephants today is uncertain. An elephant is currently being killed by poachers every 15 minutes, and humans, who love watching them so much, have declared war on their species. Most people are not poachers, ivory collectors or intentionally harming wildlife, but silence or indifference to the battle at hand is as deadly. You can choose to read this article, feel bad for a moment and then move on to your next email and start your day. Or, perhaps you will pause and think: our opportunities to help save wildlife, especially elephants, are right in front of us and grow every day. And some of these opportunities are…


DIY Raspberry Pi face recognition system - Geeky Gadgets

#artificialintelligence

Raspberry Pi enthusiasts interested in creating their very own face recognition system, using a Raspberry Pi 3 combined with the Raspberry Pi camera module, a Seeed Grove Relay, a Seeed LTE Cat 1 Pi HAT (Europe), and a 5-inch HDMI display with USB touchscreen, may be interested in the Raspberry Pi facial recognition system and smart lock with LTE Pi HAT created by the team at Seeed and published to Hackster.io. "In this project, we plan to take pictures with picamera and recognise faces in them, then display the recognition result on the screen. If a face is known, open the door, and send who opened the door to a specified phone number via SMS. So you need to connect a camera to Raspberry Pi's camera interface, and install the antenna and Grove Relay to the LTE Pi HAT, then plug the HAT into your Pi. The screen can be connected to the Raspberry Pi via an HDMI cable; don't forget to connect power to your screen and Pi."
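The quote above describes a simple decision step once the recognizer returns a name: known face, open the relay and send an SMS; unknown face, do nothing. A hedged sketch of that logic with the hardware calls injected as callbacks (`open_door` and `send_sms` are hypothetical stand-ins for the Grove Relay and LTE HAT calls, not Seeed's actual API):

```python
def handle_face(name, known_faces, open_door, send_sms, owner_number):
    """Act on a recognition result.

    `name` is the recognized person (or None if no face matched);
    `open_door` / `send_sms` stand in for the relay and LTE HAT calls.
    Returns True if the door was opened.
    """
    if name in known_faces:
        open_door()                                        # trigger the relay
        send_sms(owner_number, f"{name} opened the door")  # notify via LTE HAT
        return True
    return False
```

Keeping the hardware behind callbacks like this makes the unlock logic testable on a desktop before wiring up the relay and modem.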


Posture Recognition using Kinect, Azure IoT, ML and WebVR

#artificialintelligence

With the recent success of depth cameras such as Kinect, gesture and posture recognition has become easier. Using depth sensors, the 3D locations of body joints can be reliably extracted to be used with any machine learning framework, and specific gestures or postures can be modelled and inferred. Real-world applications in virtual reality include yoga, ballet training, golf, and anything related to activity recognition and proper posture. I also see applications in the architectural, engineering, construction, and manufacturing industries, by sending depth sensor data to the cloud to identify correct configurations. This is a proof of concept to detect the poses "Y", "M", "C", "A" and stream the result back to the browser. This video explains a little bit of how it's hooked up together.
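To make the joint-based approach concrete, here is a toy rule for one of the four poses. The project feeds Kinect 3D joints into a trained ML model; this hand-written 2D heuristic (joint names and thresholds are invented for illustration) only shows what "modelling a posture from joint locations" means:

```python
def classify_pose(joints):
    """Toy heuristic for the "Y" pose from 2D joint positions.

    `joints` maps joint names to (x, y) tuples, with y increasing upward.
    "Y" = both wrists above the head and spread wider than the shoulders.
    """
    head = joints["head"]
    lw, rw = joints["left_wrist"], joints["right_wrist"]
    ls, rs = joints["left_shoulder"], joints["right_shoulder"]
    arms_up = lw[1] > head[1] and rw[1] > head[1]
    arms_spread = lw[0] < ls[0] and rw[0] > rs[0]
    return "Y" if arms_up and arms_spread else "unknown"
```

A learned classifier replaces these hard-coded thresholds with decision boundaries fit to example recordings, which is what lets the same pipeline cover "M", "C", and "A" as well.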


Posture Recognition using Kinect, Azure IoT, ML and WebVR

#artificialintelligence

With the recent success of depth cameras such as Kinect, gesture and posture recognition has become easier. Using depth sensors, the 3D locations of body-joints can be reliably extracted to used with any machine learning framework, specific gestures or posture can be modelled and inferred. Real world applications in Virtual Reality can be used for Yoga, Ballet training, Golf, anything related to activity recognition and proper postures. I also see application of it in the Architectural, Engineering, Construction and Manufacturing Industry by sending depth sensor data to the cloud. This is a proof of concept to detect pose "Y", "M", "C", "A" and stream the result back to the browser. This video explains a littlebit how it's hooked up together.