HALIFAX, NS, Aug. 4, 2020 /CNW/ - Global Spatial Technology Solutions ("GSTS" or "the Company"), an Artificial Intelligence (AI) and Maritime Analytics company, today announced that it has been selected by the Canadian Space Agency (CSA) to develop space-based AI capability to support enhanced decision-making for a range of space applications focused on tasks using computer vision (such as would be used by exploration landers, rovers, robotics or Earth observation systems). This project is funded under the Space Technology Development Program. "This contribution will enable GSTS to expand our growing AI capabilities into the space sector to support decision-making based on the same techniques we utilize in the maritime domain, enabling detection, recognition and prediction," said Richard Kolacz, GSTS CEO. "It is equivalent to placing the brain next to the eyes of any space asset or sensor in order to support decision-making locally, rather than having to relay all the data to Earth for analysis before a decision can be made. It is the first step in the development of truly autonomous space capability." Computer vision involves the automatic extraction, analysis and understanding of information gleaned from digital images. By applying machine learning, a type of AI, it can produce actionable insights much faster and more accurately than a human can.
The 2.2M parameters in MobileNet are frozen, but there are 1.3K trainable parameters in the dense layers. You need to apply the sigmoid activation function in the final neurons to output a probability score for each genre separately. By doing so, you are relying on multiple logistic regressions trained simultaneously inside the same model. Every final neuron acts as a separate binary classifier for one single class, even though the features extracted are common to all final neurons. When generating predictions with this model, you should expect an independent probability score for each genre, and the probability scores do not necessarily sum to 1. This is different from using a softmax layer in multi-class classification, where the probability scores in the output always sum to 1.
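The sigmoid-versus-softmax distinction above can be sketched numerically. This is a minimal illustration in plain Python, with hypothetical logit values standing in for the outputs of the final dense layer; it is not the model described in the text.

```python
import math

def sigmoid(z):
    """Squash one raw score into an independent probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    """Turn a vector of raw scores into probabilities that sum to 1."""
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores (logits) for three genres.
logits = [2.0, -1.0, 0.5]

# Multi-label head: one independent sigmoid per genre.
multi_label = [sigmoid(z) for z in logits]

# Multi-class head: a single softmax over all genres.
multi_class = softmax(logits)

print(multi_label)       # each score in (0, 1); the sum is unconstrained
print(sum(multi_label))  # e.g. well above 1 here
print(sum(multi_class))  # softmax probabilities always sum to 1.0
```

Because each sigmoid is independent, a movie can score high for several genres at once, which is exactly the multi-label behavior the paragraph describes.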
There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: They were made by machine learning algorithms, not a camera. Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: "People with not a lot of experience can take these machine learning models and do pretty powerful things with them," he says. Seen at full resolution, FireEye's fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor's face, like his brow furrows and green-gray eyes, which gaze coolly at the viewer.
Today's newsletter comes with a more accurate prediction of the big Samsung event -- even if there's probably already another Galaxy device leaked before it starts -- and 100 percent more working links. After all the teases and photos, there shouldn't be many surprises, but if you want to know exactly what the next Galaxy Fold and Galaxy Note are like, then you'll find out in a few hours. With 57.5 million customers from Disney+, 8.5 million from ESPN+ (up from 2.5 million a year ago) and 35.5 million from Hulu (up from 27.9 million), Disney now counts over 100 million direct customers. However, it's bringing in less money per user than other streamers, due to discounts, all while the pandemic has closed movie theaters and kept people away from theme parks. Disney did manage a hit when it released Hamilton direct to Disney+, and it's following up with something bigger.
A Seoul National University Master's student and developer has trained a face generating model to translate normal face photographs into cartoon images in the distinctive style of Lee Mal-nyeon. The student (GitHub username: bryandlee) used webcomic images by South Korean cartoonist Lee Mal-nyeon (이말년) as input data, building a dataset of malnyun cartoon faces and then testing popular deep generative models on it. By combining a pretrained face generating model with special training techniques, they were able to train a generator at 256×256 resolution in just 10 hours on a single RTX 2080ti GPU, using only 500 manually annotated images. Since the cascade classifier for human faces provided in OpenCV -- a library of programming functions mainly aimed at real-time computer vision -- did not work well on the cartoon domain, the student manually annotated 500 input cartoon face images. The student incorporated FreezeD, a simple yet effective baseline for transfer learning of GANs proposed earlier this year by KAIST (Korea Advanced Institute of Science and Technology) and POSTECH (Pohang University of Science and Technology) researchers, to reduce the burden of heavy data and computational resources when training GANs. The developer tested the idea of freezing the early layers of the generator in transfer learning settings with the proposed FreezeG (freezing generator) and found that "it worked pretty well."
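The core mechanic behind FreezeG -- freezing the early layers of a pretrained generator and fine-tuning only the later ones -- can be sketched in a few lines of PyTorch. The tiny Sequential model and the cut point below are purely illustrative stand-ins; the actual project fine-tuned a StyleGAN-style generator, not this toy network.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained generator. In a FreezeG-style setup,
# early layers capture coarse structure and later layers fine detail.
generator = nn.Sequential(
    nn.Linear(64, 128),   # early layer: coarse structure (to be frozen)
    nn.ReLU(),
    nn.Linear(128, 128),  # early layer (to be frozen)
    nn.ReLU(),
    nn.Linear(128, 256),  # late layer: fine texture (left trainable)
)

# Freeze the early layers; only the remaining layers get gradient updates.
n_frozen = 4  # hypothetical cut point for this toy network
for layer in list(generator.children())[:n_frozen]:
    for p in layer.parameters():
        p.requires_grad = False

# The optimizer only sees the still-trainable (late) parameters.
trainable = [p for p in generator.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=2e-4)
```

Training then proceeds as usual on the new (cartoon) domain: because most parameters are frozen, far less data and compute are needed, which is consistent with the 500-image, single-GPU result reported above.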