Nix has upgraded their already awesome color sensors — and they're on sale


We've posted about the Nix Mini Color Sensor before: a pocket-sized device that tells you the color of any physical object. If you've ever struggled to match the color of a real-world object to the color of a digital object (or vice versa), then you'll quickly realize just how revolutionary this tool is. Nix has recently released a new and improved version – the Nix Pro Color Sensor – that takes everything that was great about the Nix Mini and adds several game-changing new features. While the original Nix limited you to a handful of pre-existing color libraries, the Nix Pro lets you import your own custom color libraries and use those for color matching. Whether you're a lab professional, a color consultant, or just an all-purpose color nerd, this will significantly improve your color game.

Robot Arm, Chess Computer Vision - Daniel's Blog


The game of chess is one of the world's most popular two-player board games. I often find myself wanting to play even when no one is around to play against. One solution to this problem is to play chess on a computer or mobile device against a computer opponent. However, many people would agree with me that playing a virtual game of chess is a completely different experience than playing a physical one. For this reason, I intend to use this project as an opportunity to build a six-degree-of-freedom robotic arm that will take the place of an opponent in a physical game of chess.

Color Sails: Discrete-Continuous Palettes for Deep Color Exploration

We present color sails, a discrete-continuous color gamut representation that extends the color gradient analogy to three dimensions and allows interactive control of the color blending behavior. Our representation models a wide variety of color distributions in a compact manner, and lends itself to applications such as color exploration for graphic design, illustration, and similar fields. We propose a neural network that can fit a color sail to any image. The user can then adjust color sail parameters to change the base colors, their blending behavior, and the number of colors, exploring a wide range of options for the original design. In addition, we propose a deep learning model that learns to automatically segment an image into color-compatible alpha masks, each equipped with its own color sail. This allows targeted color exploration by either editing the corresponding color sails or using standard software packages. Our model is trained on a custom diverse dataset of art and design. We provide both quantitative evaluations and a user study, demonstrating the effectiveness of color sail interaction. Interactive demos are available online.
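The discrete-continuous idea can be illustrated with a toy sketch: blend three base colors over a triangular patch with barycentric weights, warp the blend with an exponent to stand in for an adjustable blending behavior, and sample the patch on a grid to obtain a discrete palette. This is only a simplified approximation of the representation described above; the function names, the exponent-based warping, and the sampling scheme are assumptions, not the paper's actual parameterization:

```python
def blend_sail(corners, u, v, gamma=1.0):
    """Blend three (r, g, b) base colors at barycentric coordinates
    (1 - u - v, u, v). A gamma other than 1 warps the weights, changing
    how quickly colors transition across the patch."""
    w = ((1.0 - u - v) ** gamma, u ** gamma, v ** gamma)
    s = sum(w)
    w = tuple(wi / s for wi in w)
    return tuple(sum(wi * c[ch] for wi, c in zip(w, corners))
                 for ch in range(3))

def sail_palette(corners, n, gamma=1.0):
    """Sample the patch at subdivision level n, yielding a discrete
    palette of (n + 1)(n + 2) / 2 colors."""
    return [blend_sail(corners, i / n, j / n, gamma)
            for i in range(n + 1) for j in range(n + 1 - i)]
```

Varying `n` changes the number of colors while the underlying continuous patch stays fixed, which is the sense in which the representation is both discrete and continuous.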

Fast Color Image Segmentation Using Commodity Hardware

AAAI Conferences

James Bruce, Tucker Balch, Manuela Veloso
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract: Vision systems employing region segmentation by color are crucial in applications such as object tracking, automated manufacturing, and mobile robotics. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. However, we have found that, with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. This paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hz on general-purpose commodity hardware. The software system is composed of three main parts: a color threshold classifier, a region merger that computes connected components, and a separation and sorting system that gathers various region features and sorts regions by size. We describe the algorithms and representations, as well as three applications in which the system has been used.
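The threshold-classification step can be sketched in a few lines. A classic trick for classifying a pixel against up to 32 rectangular color classes (e.g. in YUV space) is to precompute, for each channel, a 256-entry lookup table whose entries are bitmasks of the classes admitting that channel value; a pixel's class membership is then three table lookups combined with bitwise AND. This is a minimal illustrative sketch, not the paper's code; the function names and example ranges are assumptions, and the connected-component merging and sorting stages are omitted:

```python
NUM_COLORS = 32  # one bit per color class, as in the paper's 32-class limit

def build_luts(color_defs):
    """color_defs: list of ((ymin, ymax), (umin, umax), (vmin, vmax)),
    one rectangular threshold box per color class.
    Returns three 256-entry lookup tables; entry k of table ch is a
    bitmask of the classes whose range on channel ch includes value k."""
    luts = [[0] * 256 for _ in range(3)]
    for bit, ranges in enumerate(color_defs):
        for ch, (lo, hi) in enumerate(ranges):
            for val in range(lo, hi + 1):
                luts[ch][val] |= 1 << bit
    return luts

def classify(y, u, v, luts):
    """A pixel belongs to a class only if all three channels fall in
    range, so its class bitmask is the AND of three lookups."""
    return luts[0][y] & luts[1][u] & luts[2][v]
```

With this layout, adding more color classes costs nothing at classification time: the per-pixel work stays at three lookups and two ANDs, which is what makes software segmentation at frame rate feasible on commodity CPUs.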

Exploring the Performance of the iRobot Create for Object Relocation in Outer Space

AAAI Conferences

This research explores the performance of the iRobot Create platform for object relocation in an outer-space environment. The ultimate goal is for it to become a symbol of innovation for robots sent into outer space. Functioning as a tool-bot and an active assistant, this robot aims to assist with small duties and respond to commands. With its arm and color-blob recognition capabilities, the robot has the potential to receive a request, register and associate it with existing objects in its line of sight, and maneuver the arm to act accordingly, grabbing the correct object and giving it to a worker or engineer. This poster and presentation explain current progress on the implementation of the iRobot Create for this purpose.