Geo-data firm Fugro collects and analyses information about the Earth and the structures built upon it. It surveys the land, and to map objects on the sea floor it uses side-scan sonar data collected from boats. One project sees Fugro search the sea for boulders to help its customers determine whether they can set up an offshore windfarm. "Windfarm companies want to know where the impediments and where the potential sites they can build windfarms are," Fugro senior innovation engineer Marcus Nepveaux said, speaking at AWS re:Invent in Las Vegas. "So we go in, we map the sea floor for them, tell them where the big rocks or the little rocks are … they may be as small as a foot, and as big as we can detect."
James Loy has more than five years of expert experience in data science in the finance and healthcare industries. He has worked with the largest bank in Singapore to drive innovation and improve customer loyalty through predictive analytics. He also has experience in the healthcare sector, where he applied data analytics to improve decision-making in hospitals. He has a master's degree in computer science from Georgia Tech, with a specialization in machine learning. His research interests include deep learning and applied machine learning, as well as developing computer-vision-based AI agents for industrial automation.
It is becoming increasingly clear that for most working people, a proportion of the tasks they currently perform will be either completely replaced by machines (AI if the tasks are cognitive, robots if they are manual) or augmented by a human-machine interface. While there is less clarity about the types of tasks that will remain within the human domain, we can make some predictions. We know that, right now and in the foreseeable future, machines are generally poor at understanding a person's mood, at sensing the situation around them, and at developing trusting relationships. So as the World Economic Forum report on future skills argued, it is human "soft skills" that will become increasingly valuable -- skills such as empathy, context sensing, collaboration, and creative thinking. That means that millions of people across the world will have to become a great deal better versed in these soft skills.
Enterprises are creating more and more videos and using them for various informational purposes, including marketing, training of customers, partners and employees, and internal communications. However, videos are considered the black holes of the internet because it is very hard to see what is inside them. The opaque nature of videos equally impacts end users, who spend a lot of time navigating to their point of interest, leading to severe underutilization of videos as a powerful medium of information. In this talk, we will describe the visual processing pipeline of the VideoKen platform, which includes: a graph-based algorithm combined with deep scene-text detection to identify key visual frames in the video; an FCN-based algorithm for semantic segmentation of screen content in visual frames; a transfer-learning-based visual classifier to categorize screen content into categories such as slides, code walkthroughs, demos, and handwritten notes; and an algorithm to detect visual coherency and select indices from the video. We will discuss challenges and experiences in implementing and iterating on these algorithms, drawing on our experience processing 100K video hours of content.
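The key-visual-frame step above can be sketched in miniature. This is a hypothetical simplification, not VideoKen's actual algorithm (which uses a graph-based method with deep scene-text detection): here a "frame" is a toy feature vector, visual change is plain Euclidean distance, and a frame is kept as a keyframe whenever it differs enough from the last kept frame.

```python
import math

def frame_distance(a, b):
    """Euclidean distance between two frame feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_keyframes(frames, threshold=1.0):
    """Keep a frame whenever it differs enough from the last kept frame."""
    if not frames:
        return []
    keyframes = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if frame_distance(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes

# Toy "video" with three visually distinct segments
frames = [[0, 0], [0.1, 0], [5, 5], [5.1, 5], [10, 0]]
print(select_keyframes(frames, threshold=2.0))  # → [0, 2, 4]
```

Near-duplicate frames within a segment are skipped, so the indices that survive mark the segment boundaries a viewer would want to jump to.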
You browse an e-commerce site on your mobile device, looking for a pair of shoes. Then, with every swipe on your phone, you see ads from other retailers offering you shoes, shoes and more shoes. Are you flattered that the retailer shared your session cookie with third parties? Or do you shake your head, annoyed that these ads are following you everywhere? You visit an online retailer and can't find what you're looking for.
"I really do think [nbdev] is a huge step forward for programming environments," said Chris Lattner, inventor of Swift, LLVM, and Swift Playgrounds. nbdev is a Python programming environment that allows you to create complete Python packages, including tests and a rich documentation system, all in Jupyter Notebooks. We've already written a large programming library (fastai v2) using nbdev, as well as a range of smaller projects. Nbdev is a system for something that we call exploratory programming. Exploratory programming is based on the observation that most of us spend most of our time as coders exploring and experimenting.
Mumbai: The financial sector in India is driving investments into chatbots and artificial intelligence (AI) to augment customer service, but bankers are convinced there will not be job losses, as these new tools will only complement staff. When it comes to AI, it is not upstarts but the big guns of banking, with their resources, that are driving investments. State Bank of India (SBI) is working with IBM to make use of Watson, a question-answering software system, to assist its staff. HDFC Bank has tied up with artificial intelligence firm Niki (funded by Ratan Tata and Ronnie Screwvala) to bring in conversational banking. Last week, Yes Bank partnered with Payjo to launch AI-led digital initiatives.
BERT is one of the most popular models in the NLP spectrum, known for producing state-of-the-art results in a variety of language modeling tasks. Bidirectional Encoder Representations from Transformers is a very powerful NLP model that has outperformed many others, and the state-of-the-art results it produces on a variety of language tasks show that it is indeed a big deal. Those results come from its underlying architecture: BERT is built on the Transformer, an architecture that grew out of work on seq2seq (sequence-to-sequence) models. A seq2seq model is a network that converts a given sequence of words into a different sequence, and the attention mechanism at the heart of the Transformer is what lets the model relate the words that matter most to one another.
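The attention mechanism that lets a Transformer relate important words can be illustrated in a few lines. The sketch below is a minimal scaled dot-product self-attention in pure Python, an illustration of the mechanism BERT builds on, not BERT itself; the token vectors and dimensions are toy values chosen for readability.

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q·K^T / sqrt(d)) · V."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # How strongly this query attends to each key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Three toy token embeddings attending over each other (self-attention)
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(tokens, tokens, tokens)
```

Each output row is a convex combination of the value vectors, weighted by how relevant each token is to the query token; stacking many such layers (plus learned projections) is the core of the Transformer encoder BERT uses.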
Sophisticated machine learning applications require not only enormous amounts of training data, but powerful computer hardware on which to train. An analysis conducted by San Francisco research firm OpenAI found that since 2012, the amount of compute used in the largest training runs has been increasing exponentially with a 3.4-month doubling time, and that it has grown by more than 300,000 times over that period. The trend spurred the development of supercomputers like the U.S. Department of Energy's Sierra and Summit, which leverage dedicated accelerator chips to speed up AI computation. Now IBM, in collaboration with New York State, SUNY Polytechnic Institute, and other members of IBM's AI Hardware Center, has delivered a new machine for the Department of Computer Science at Rensselaer Polytechnic Institute (RPI) that is optimized for state-of-the-art machine learning workloads. It's dubbed Artificial Intelligence Multiprocessing Optimized System, or AiMOS (in honor of Rensselaer cofounder Amos Eaton), and it will principally tackle projects in biology, chemistry, the humanities, and related domains underway at the new IBM Research AI Hardware Center on the SUNY campus in Albany.
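The two figures in OpenAI's analysis are consistent with each other, and the arithmetic is worth making explicit: with a doubling time of 3.4 months, compute grows by a factor of 2^(N/3.4) over N months, so a factor on the order of 300,000 accumulates in only about five years. The snippet below is illustrative arithmetic only.

```python
def growth_factor(months, doubling_time=3.4):
    """Exponential growth: how much compute multiplies over `months`."""
    return 2 ** (months / doubling_time)

# One doubling time gives exactly a factor of 2
print(growth_factor(3.4))

# About 5.2 years of 3.4-month doublings lands on the order of 300,000x
print(round(growth_factor(5.2 * 12)))
```

This is why the trend is so striking: a doubling time measured in months, rather than the roughly two years of Moore's law, compresses decades' worth of hardware scaling into a handful of years.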
These days it seems that nearly every product and startup boasts some kind of A.I. capability, but when it comes to advancing the domain beyond simplistic machine learning, technologists at MIT Technology Review's Future Compute conference say these A.I. systems will need to be more human than not. When discussing A.I. during the conference's first day on December 2nd, speakers focused on two distinct paths for the technology: more human-like A.I.s as well as more computer-like humans. This dual approach was presented as a potential future for human-machine symbiosis. But what exactly does that all mean, and is it even a good thing? Catherine Schuman, a research scientist at Oak Ridge National Laboratory, began the conversation by presenting her work on neuromorphic computing.