CV is a nascent market, but it already contains a plethora of big technology companies and disruptors. Technology players with large sets of visual data are leading the pack in CV, with Chinese and US tech giants dominating each segment of the value chain. Google has been at the forefront of CV applications since 2012. Over the years the company has hired several ML experts, and in 2014 it acquired the deep learning start-up DeepMind. Google's biggest asset is the wealth of customer data generated by its search business and YouTube.
A few years ago, numerous experienced computer chip designers set out on their own to design novel kinds of parts to improve the performance of artificial intelligence. It has taken a while, but the world is finally seeing what those young hopefuls have been working on. The new chips coming out suggest, as ZDNet has reported in the past, that AI is totally changing the nature of computing. They also suggest that changes in computing will, in turn, affect how artificial intelligence programs, such as deep learning neural networks, are designed. Case in point: on Thursday, startup Tenstorrent, founded in 2016 and headquartered in Toronto, Canada, unveiled its first chip, "Grayskull," at a microprocessor conference run by the legendary computer chip analysis firm The Linley Group.
This opinion piece is inspired by the old Danish proverb: "Making predictions is hard, especially about the future" (1). As every reader knows, the momentum of artificial intelligence (AI) and the eventual implementation of deep learning models seem assured. Some pundits have gone considerably further, however, and predicted a sweeping AI takeover of radiology. Although many radiologists support AI and believe it will enable greater efficiency, a recent study found very different reactions among medical students (2). While such doomsday predictions are understandably attention-grabbing, they are highly unlikely to come true, at least in the short term.
Yesterday, AIM published an article on how difficult it is for small labs and individual researchers to persevere in the high-compute, high-cost field of deep learning. Today, US policymakers introduced a new bill intended to make deep learning research affordable for all. The National AI Research Resource Task Force Act was introduced in the House by Representative Anna G. Eshoo (D-CA) and her colleagues. The bill has met with unanimous support from top universities and companies engaged in artificial intelligence (AI) research. Well-known supporters include Stanford University, Princeton University, UCLA, Carnegie Mellon University, Johns Hopkins University, OpenAI, Mozilla, Google, Amazon Web Services, Microsoft, IBM and NVIDIA, among others.
Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', and such attacks can have a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, the number of exposed records rose to 4.1 billion. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, such use is growing and becoming more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and doing so will increase threats to digital security and the volume and sophistication of cyber attacks.
Uncovering evidence for historical theories and identifying patterns in past events have long been hindered by the labour-intensive process of inputting data from artefacts and handwritten records. The adoption of artificial intelligence and machine learning techniques is speeding up such research and drawing attention to overlooked information. But this approach, known as "digital humanities", is in a battle for funding against more future-focused applications of AI. "There is a lot of interest in digital humanities, but there is not a lot of money," says Ilan Shimshoni, professor of computer vision and machine learning at the University of Haifa in Israel, where he works on archaeological projects that include reassembling artefacts from photos of fragments. "If you want to do an analysis of Facebook you'll get much more money than if you want to look at ancient Greek artefacts." Archaeological puzzles may not seem as urgent as computer science projects in healthcare, finance and other industries, but applying algorithmic techniques to historical research can improve AI's capabilities, says Ayellet Tal, a computer science researcher at Israel's Technion who works on archaeology.
The Dow and the S&P 500 are currently on track to close out a positive second quarter. The markets were flat to slightly higher this morning, the last trading day of the quarter. Investors remain cautious amid mixed economic data and the looming threat of the coronavirus pandemic. All eyes will be on Federal Reserve Chairman Jerome Powell and Treasury Secretary Steven Mnuchin as they prepare to testify before the House Financial Services Committee. Amid this uncertainty, our deep learning algorithms have parsed the data and used artificial intelligence (AI) to help you spot the Top Buys for today.
One of the many benefits of artificial intelligence (AI) is that it helps us view societal problems from a different perspective. While there has been much hubbub about how AI might be misused, we must not overlook the many ways it can be used for good. Our global issues are complex, and AI gives us a valuable tool for augmenting human efforts to solve vexing problems. Here are 10 of the best ways artificial intelligence is used for good. Artificial intelligence, powered by deep-learning algorithms, is already in use in healthcare.
What makes us humans so good at making sense of visual data? That question has preoccupied artificial intelligence and computer vision scientists for decades. Efforts to reproduce the capabilities of human vision have so far yielded results that are commendable but still leave much to be desired. Our current AI algorithms can detect objects in images with remarkable accuracy, but only after they have seen many (thousands or even millions of) examples, and only if the new images are not too different from those they have seen before. A range of efforts aims to solve the shallowness and brittleness of deep learning, the main AI technique used in computer vision today. But sometimes, finding the right solution is predicated on asking the right questions and formulating the problem in the right way.
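To make that limitation concrete, here is a minimal sketch, assuming PyTorch and torchvision 0.13 or later (the image path is hypothetical), of the kind of pretrained classifier described above: it can only name the 1,000 ImageNet categories it was trained on, and its accuracy degrades on images that stray from that training distribution.

```python
# Minimal sketch: a classifier pretrained on ~1.2 million labelled ImageNet
# images. It can only name the 1,000 categories it saw during training, and
# inputs that deviate from that distribution degrade its accuracy.
import torch
from torchvision import models
from PIL import Image

# Load a ResNet-50 with ImageNet weights (torchvision >= 0.13 weights API).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The exact preprocessing the network was trained with; skipping or
# changing it is one easy way to push inputs off-distribution.
preprocess = weights.transforms()

def classify(path: str) -> str:
    """Return the most likely ImageNet class name for a single image."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    index = int(logits.argmax(dim=1))
    return weights.meta["categories"][index]

# Hypothetical usage: whatever the photo shows, the model must map it
# onto one of its 1,000 known labels.
# print(classify("photo.jpg"))
```

A model like this has no notion of objects outside its label set; confronted with something genuinely new, it will confidently pick the nearest familiar category, which is precisely the brittleness described above.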