Machine Learning


What is time-series data, and why are we building a time-series database (TSDB)?

#artificialintelligence

Like all good superheroes, every company has its own origin story explaining why it was created and how it grew over time. This article covers the origin story of QuestDB and frames it with an introduction to time-series databases to show where we sit in that landscape today. A time series is a succession of data points ordered by time. These data points could be a stream of events from an application's users, the state of CPU and memory usage over time, financial trades recorded every microsecond, or readings from a car's sensors describing the vehicle's acceleration and velocity. For that reason, time-series workloads are synonymous with large amounts of data.
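
As a loose illustration of the idea (nothing here is QuestDB-specific; the column names and values are invented for the example), a time series is just a sequence of timestamped values that is typically queried and aggregated by time window:

```python
import numpy as np
import pandas as pd

# A time series is just values ordered by a timestamp.
# Here we simulate one CPU-usage sample per second for a minute.
timestamps = pd.date_range("2021-01-01 00:00:00", periods=60, freq="S")
cpu_usage = np.clip(np.random.normal(loc=40, scale=10, size=60), 0, 100)

series = pd.DataFrame({"timestamp": timestamps, "cpu_percent": cpu_usage})
series = series.set_index("timestamp")

# Typical time-series queries aggregate over time windows,
# e.g. the average usage per 10-second bucket.
print(series.resample("10S").mean())
```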


JaidedAI/EasyOCR

#artificialintelligence

Ready-to-use OCR with support for 70 languages, including Chinese, Japanese, Korean and Thai. See the list of supported languages. Note 1: for Windows, please install torch and torchvision first by following the official instructions at https://pytorch.org. On the PyTorch website, be sure to select the CUDA version that matches your system.
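
A minimal usage sketch in the spirit of the project's quick start (the image file name and the exact language codes here are just example inputs; pick codes from the supported-language list):

```python
import easyocr

# Build a reader for the languages you need (models are downloaded on first use).
# Passing gpu=False forces CPU inference if no CUDA device is available.
reader = easyocr.Reader(['ch_sim', 'en'], gpu=False)

# Run OCR on an image file; each result is (bounding_box, text, confidence).
results = reader.readtext('chinese.jpg')
for box, text, confidence in results:
    print(f"{text} ({confidence:.2f})")
```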


Language & Cognition: re-reading Jerry Fodor

#artificialintelligence

In my opinion, the late Jerry Fodor was one of the most brilliant cognitive scientists I knew of, if you wanted a deep understanding of the major issues in cognition and the plausibility or implausibility of various cognitive architectures. Very few had the technical breadth and depth to tackle some of the biggest questions concerning the mind, language, computation, the nature of concepts, innateness, ontology, etc. The other day I felt like re-reading his Concepts -- Where Cognitive Science Went Wrong (I have read this small monograph at least 10 times before, and I must say I still do not fully comprehend everything in it). But what did happen on the 11th reading of Concepts is this: I now have a new and deeper understanding of his Productivity, Systematicity and Compositionality arguments, which should clearly put an end to any talk of connectionist architectures being a serious architecture for cognition -- by 'connectionist architectures' I roughly mean also modern-day 'deep neural networks' (DNNs), which are essentially, if we strip out the advances in compute power, the same models that were the target of Fodor's onslaught. I have always understood the 'gist' of his argument, but I believe I now have a deeper understanding -- and, in the process, I am now more convinced than ever that DNNs cannot be considered serious models for high-level cognitive tasks (planning, reasoning, language understanding, problem solving, etc.) beyond being statistical pattern recognizers (although very good ones at that).


Paving The Way For Software 2.0 With Kotlin - Liwaiwai

#artificialintelligence

Our work with differentiable programming, which enables programs to optimize themselves, is part of Facebook AI's broader efforts to build more advanced tools for machine learning (ML) programming. That's why we're extending the Kotlin compiler to make differentiability a first-class feature of the Kotlin language, as well as developing a system for tensor typing. By enabling intuitive and performant differentiable programming in Kotlin, our work lets developers explore Software 2.0, where software essentially writes itself, and create powerful, flexible programs that take advantage of problem structure while seamlessly maintaining type safety and keeping debugging simple. Today, most code is either learnable (written using restrictive machine learning libraries) or explicitly programmed (using traditional coding paradigms). A major obstacle toward achieving Software 2.0 is that there's no true compatibility between these two methods.
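
To give a flavor of what differentiable programming means in practice, here is a tiny Python sketch of forward-mode automatic differentiation with dual numbers. It is purely illustrative and is not Facebook's Kotlin system; the class and function names are invented for the example.

```python
# Toy forward-mode autodiff: the runtime tracks a derivative alongside each value,
# so ordinary-looking programs become differentiable.

class Dual:
    """A value paired with its derivative with respect to one input."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def grad(f, x):
    """Derivative of a single-argument program f at the point x."""
    return f(Dual(x, 1.0)).deriv


# Any program built from + and * is now differentiable without extra work.
f = lambda x: 3 * x * x + 2 * x + 1
print(grad(f, 2.0))   # f'(x) = 6x + 2, so this prints 14.0
```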


Viewpoint: Moore's law isn't broken - it's overheated

#artificialintelligence

Nick Harris, CEO and co-founder of US photonics computing specialist Lightmatter, explains how advances in photonic computing technology could give Moore's Law a shot in the arm. Recent advances in machine learning, computer vision, natural language processing, deep learning and more are already impacting life and humanity in ways both seen and unseen. This is especially true as it relates to artificial intelligence (AI). The demands of AI are growing at a blistering rate. Training AI models today requires ultra-high-performance computer chips, leading to what one might call a 'space race' among top technology companies to build, acquire, or get exclusive access to the highest-performance chips as soon as they come to market.


Advancing Artificial Intelligence Research - Liwaiwai

#artificialintelligence

As part of a new collaboration to advance and support AI research, the MIT Stephen A. Schwarzman College of Computing and the Defense Science and Technology Agency in Singapore are awarding funding to 13 projects led by researchers within the college that target one or more of the following themes: trustworthy AI, enhancing human cognition in complex environments, and AI for everyone. The 13 selected research projects are highlighted below. Emerging machine learning technology has the potential to significantly help with, and even fully automate, many tasks that so far have confidently been entrusted only to humans. Leveraging recent advances in realistic graphics rendering, data modeling, and inference, Madry's team is building a radically new toolbox to fuel streamlined development and deployment of trustworthy machine learning solutions. In natural language technologies, most of the world's languages are not richly annotated.


A Comprehensive Guide to Convolution Neural Network

#artificialintelligence

As we saw in the structure of a CNN, convolution layers are used to extract features, and to extract features they use filters. So let us now discuss how features are extracted using filters. In the above image, we used filters such as Prewitt and Sobel and obtained the edges. For a detailed understanding of working with images and extracting edges, you can look at my blog below for the theory and a practical implementation. Let us understand how the filter operation works using an animated image.
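
As a concrete sketch of that filter operation (the tiny grayscale image below is synthetic and made up for illustration), the following convolves an array with a Sobel kernel to pick out vertical edges:

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernel that responds to vertical edges (horizontal intensity change).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Tiny synthetic grayscale image: dark on the left, bright on the right.
image = np.array([[0, 0, 0, 255, 255, 255]] * 6, dtype=float)

# Convolving slides the kernel over the image and sums element-wise products;
# large responses mark positions where pixel intensity changes sharply.
edges = convolve2d(image, sobel_x, mode="same", boundary="symm")
print(np.abs(edges).astype(int))
```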


Types of activation functions in Deep Learning

#artificialintelligence

There are various aspects of deep learning that we usually have to consider while building a deep learning model: choosing the right number of layers, the activation function, the number of epochs, the loss function, and the optimizer, to name a few. I am revisiting these concepts for one of my projects, so I decided to write about the different activation functions we use. So why do we even use an activation function and not just feed the weighted summation directly to the next layer? The problem, if we did this, is that the layers of the neural network would not be able to learn complex functions: a stack of purely linear layers collapses into a single linear transformation. The activation function adds non-linearity to the model.
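
A small NumPy sketch of that point, with made-up weights rather than a trained model: two linear layers without an activation reduce to one linear layer, while inserting a ReLU between them does not.

```python
import numpy as np

# Two "layers" with made-up weights and a sample input.
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
W2 = np.array([[2.0, 0.0], [-1.0, 3.0]])
x = np.array([1.0, 2.0])

# Without an activation, two linear layers equal one linear layer:
# W2 @ (W1 @ x) == (W2 @ W1) @ x for every input x.
no_activation = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(no_activation, collapsed))  # True -> no extra expressiveness

# Common activation functions add the missing non-linearity.
relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
tanh = np.tanh

with_relu = W2 @ relu(W1 @ x)
print(no_activation, with_relu)  # the ReLU changes the mapping
```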


Python Language: Frontrunner in Shaping the Future of Machine Learning

#artificialintelligence

Machine Learning (ML) is rapidly changing the world of technology with its amazing features. Machine learning is slowly making its way into every part of our daily lives, from making appointments and checking calendars to playing music and displaying programmatic advertisements. The technology is so accurate that it predicts our needs even before we think about them. The opportunities in machine learning, and its future prospects, are very promising. In particular, learning machine learning with the Python programming language brings its own set of benefits.


Interview with Ionut Schiopu – ICIP 2020 award winner

AIHub

Ionut Schiopu and Adrian Munteanu received a Top Viewed Special Session Paper Award at the IEEE International Conference on Image Processing (ICIP 2020) for their paper "A study of prediction methods based on machine learning techniques for lossless image coding". Here, Ionut Schiopu tells us more about their work. The topic of our paper is a more efficient algorithm for lossless image compression based on machine learning (ML) techniques, where the main objective is to minimize the amount of data required to represent the input image without any loss of information. In recent years, a new research strategy for coding has emerged that explores the advances brought by modern ML techniques, proposing novel hybrid coding solutions in which specific modules of conventional coding frameworks are replaced with more efficient ML-based modules. Our paper follows this strategy and uses a deep neural network to replace the prediction module in a conventional coding framework.
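
To illustrate the general prediction-plus-residual idea behind lossless predictive coding (this toy uses a trivial previous-sample predictor, not the deep network from the paper), here is a short sketch:

```python
import numpy as np

# Toy 1-D "image" row. In lossless predictive coding, the encoder predicts
# each sample from previously decoded samples and stores only the residual.
row = np.array([100, 101, 103, 103, 110, 111], dtype=np.int32)

# Trivial predictor: predict each pixel as the previous pixel
# (the paper replaces this kind of hand-crafted predictor with a deep net).
prediction = np.concatenate(([0], row[:-1]))
residual = row - prediction          # small values -> cheaper to entropy-code

# Decoding reverses the process exactly, so no information is lost.
reconstructed = np.cumsum(residual)
assert np.array_equal(reconstructed, row)
print(residual)
```

The better the predictor, the smaller (and more compressible) the residuals, which is where an ML-based prediction module can improve on hand-crafted ones.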