We present color sails, a discrete-continuous color gamut representation that extends the color gradient analogy to three dimensions and allows interactive control of the color blending behavior. Our representation models a wide variety of color distributions in a compact manner, and lends itself to applications such as color exploration for graphic design, illustration and similar fields. We propose a Neural Network that can fit a color sail to any image. Then, the user can adjust color sail parameters to change the base colors, their blending behavior and the number of colors, exploring a wide range of options for the original design. In addition, we propose a Deep Learning model that learns to automatically segment an image into color-compatible alpha masks, each equipped with its own color sail. This allows targeted color exploration by either editing their corresponding color sails or using standard software packages. Our model is trained on a custom diverse dataset of art and design. We provide both quantitative evaluations, and a user study, demonstrating the effectiveness of color sail interaction. Interactive demos are available at www.colorsails.com.
The idea is to give a grasp of some concepts that are necessary to understand what comes next, without going into too much detail, since a full explanation is beyond the scope of this post. Feel free to skip these parts if you already know what they cover. As anticipated earlier, a color can be represented as a point in an n-dimensional space called a color space. Most commonly the space is three-dimensional, and the coordinates in that space can be used to encode a color. There are many color spaces for different purposes and with different gamuts (ranges of colors), and in each of them it is possible to define a distance metric that quantifies the difference between colors. The most common and simplest distance metric is the Euclidean distance, which is used in both the RGB and Lab color spaces. The RGB (abbreviation of red-green-blue) color space is by far the most widely used. The idea is that colors can be created by combining red, green and blue. A color in RGB is usually encoded as a 3-tuple of 8 bits per channel, so each dimension takes a value in the range [0, 255], where 0 stands for absence of that channel and 255 stands for its full intensity.
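As a quick illustration of the Euclidean metric just mentioned, here is a minimal sketch of computing the distance between two RGB colors (the function name is our own, not from any particular library):

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance between two RGB colors given as (r, g, b) tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# Pure red vs. pure green: sqrt(255**2 + 255**2) = 255 * sqrt(2) ~ 360.62
print(round(rgb_distance((255, 0, 0), (0, 255, 0)), 2))
```

The same formula applies unchanged in Lab space; only the meaning of the three coordinates differs.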
James Bruce, Tucker Balch, Manuela Veloso. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: Vision systems employing region segmentation by color are crucial in applications such as object tracking, automated manufacturing and mobile robotics. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. However, we have found that with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. This paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hz on general-purpose commodity hardware. The software system is composed of three main parts: a color threshold classifier, a region merger to calculate connected components, and a separation and sorting system that gathers various region features and sorts regions by size. The algorithms and representations are described, along with three applications in which the system has been used.
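As a rough illustration of the threshold-classify-then-merge pipeline the abstract outlines (not the paper's actual, heavily optimized implementation; the class names and thresholds below are hypothetical), a minimal sketch:

```python
# Minimal sketch: per-channel threshold classification followed by
# connected-component grouping and sorting regions by size.
from collections import deque

# Each hypothetical color class is a set of per-channel (lo, hi) thresholds.
CLASSES = {
    "red":  ((128, 255), (0, 100), (0, 100)),
    "blue": ((0, 100), (0, 100), (128, 255)),
}

def classify(pixel):
    """Return the first class whose thresholds all contain the pixel, else None."""
    for name, bounds in CLASSES.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(pixel, bounds)):
            return name
    return None

def regions(image):
    """Group same-class 4-connected pixels via flood fill; sort by size, largest first."""
    h, w = len(image), len(image[0])
    labels = [[classify(p) for p in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    out = []
    for y in range(h):
        for x in range(w):
            if labels[y][x] is None or seen[y][x]:
                continue
            cls, pixels, q = labels[y][x], [], deque([(y, x)])
            seen[y][x] = True
            while q:
                cy, cx = q.popleft()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and labels[ny][nx] == cls:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            out.append((cls, len(pixels), pixels))
    return sorted(out, key=lambda r: r[1], reverse=True)
```

The real system's speed comes from doing the classification step with bitwise operations over many pixels at once rather than per-pixel Python loops like these.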
Electronics and Communication, Kyoto University, Yoshida, Kyoto, Japan. Abstract: Cooperative use of pattern information and natural language information is quite effective for sophisticated and flexible information processing. Therefore, it is important to investigate the integration of these kinds of information (especially in multimedia). For this purpose, we propose a method for image analysis that uses natural language information extracted from the explanation text accompanying image data. First, we describe the method for extracting color information from the explanation text. Then, we describe how this color information is used for the extraction of objects from the image data. We report experimental results to show the effectiveness of our method. We also report on an experimental multimedia database system for a pictorial book of flora, which we developed using the results of the experiment.
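To make the two steps concrete, here is a hypothetical sketch of extracting color words from an explanation text and then selecting matching pixels in an image; the keyword list, prototype colors, and distance threshold are illustrative assumptions, not the paper's actual method:

```python
import math

# Hypothetical mapping from color words to prototype RGB values.
COLOR_WORDS = {"red": (255, 0, 0), "yellow": (255, 255, 0), "green": (0, 128, 0)}

def colors_in_text(text):
    """Extract the color keywords mentioned in an explanation text."""
    words = text.lower().split()
    return [w for w in COLOR_WORDS if w in words]

def mask_for_color(image, color, max_dist=120):
    """1 where a pixel is within max_dist (Euclidean, RGB) of the named color."""
    proto = COLOR_WORDS[color]
    return [[1 if math.dist(p, proto) <= max_dist else 0 for p in row]
            for row in image]
```

A mask like this could then seed object extraction, e.g. finding the "yellow petals" region that the text describes.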
Confused about renovating your space? Choosing the perfect color for your walls is always a challenging task: one does rounds of color consultation and several patch tests. This paper proposes an AI tool that pitches paint based on the attributes of your room and its furniture, and visualizes it on your walls, making the color selection process a whole lot easier. The tool takes in images of a room and detects furniture objects using YOLO object detection. Once these objects have been detected, it picks out the color of each object. This object-specific information is then appended to the room attributes (room_type, room_size, preferred_tone, etc.), and a deep neural net is trained to predict the color/texture/wallpaper for the walls. Finally, these predictions are visualized on the walls in the provided images. The idea is to capture the knowledge of a color consultant and pitch colors that suit the walls, provide good contrast with the furniture, and harmonize with the other colors in the room. Transfer learning from YOLO weights pre-trained on the COCO dataset was used as a starting point, and the weights were later fine-tuned on additional images. The color-prediction model was trained on 1,000 records listing room and furniture attributes. Given a room image, the method finds the best color scheme for the walls; these predictions are then visualized on the walls using image segmentation. The results are visually appealing and automatically enhance the color look-and-feel.
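One step of the pipeline above, picking out an object's color from its detected bounding box, can be sketched as follows; the box format and the simple mean-color strategy are illustrative assumptions, not necessarily what the paper uses:

```python
# Hypothetical sketch: representative color of a detected furniture region,
# taken as the mean RGB over the pixels inside the bounding box.
def dominant_color(image, box):
    """Mean RGB color of the pixels inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    pixels = [p for row in image[y0:y1] for p in row[x0:x1]]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))
```

The resulting color, together with the room attributes, would form one input record for the wall-color predictor.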