Why Traditional ETL Tools Are Less Relevant Today

#artificialintelligence

Data has been the primary reason why computers and information technology evolved, and in the modern age it is the key ingredient driving most businesses. Data storage and processing have come of age. I started my data journey almost two decades ago, working on traditional BI and ETL tools, but over the last few years there has been a major shift away from these concepts and technologies.


Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT

#artificialintelligence

This study presents a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second pair (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19-confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and the present (April 2020).
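The paper's code is not reproduced here, but as a rough illustration of the global measures, here is a minimal NumPy sketch assuming PO and PHO denote percentages of (high-)opacity lesion volume relative to total lung volume; the function name and the HU threshold for "high opacity" are assumptions for the example, not the paper's exact values:

```python
import numpy as np

def global_severity_measures(lung_mask, lesion_mask, ct_hu, high_opacity_hu=-200):
    """Illustrative global measures from binary 3D segmentation masks.

    lung_mask, lesion_mask: boolean arrays with the same shape as ct_hu.
    high_opacity_hu is an assumed threshold, not the paper's published value.
    """
    lesion = lesion_mask & lung_mask
    lung_volume = lung_mask.sum()
    # Percentage of opacity: lesion volume relative to total lung volume.
    po = 100.0 * lesion.sum() / lung_volume
    # Percentage of high opacity: only dense (e.g., consolidated) lesion voxels.
    pho = 100.0 * (lesion & (ct_hu >= high_opacity_hu)).sum() / lung_volume
    return po, pho
```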


Neural Nets Aren't Black Boxes

#artificialintelligence

If you think neural nets are black boxes, you're certainly not alone. While they may not be as interpretable as something like a random forest (at least not yet), we can still understand how they process data to arrive at their predictions. In this post we'll do just that as we build our own network from scratch, starting with logistic regression.
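The post builds up from logistic regression; as a minimal sketch of that starting point (the hyperparameters and function names here are illustrative, not the author's), logistic regression is just a one-layer network trained by gradient descent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """Logistic regression: a one-layer 'neural net' with a sigmoid output."""
    n_samples, n_features = X.shape
    w, b = np.zeros(n_features), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)              # forward pass: predicted probabilities
        grad = p - y                        # gradient of binary cross-entropy wrt logits
        w -= lr * (X.T @ grad) / n_samples  # gradient-descent update for weights
        b -= lr * grad.mean()               # and for the bias
    return w, b
```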


Implement a Neural Network from Scratch with NumPy

#artificialintelligence

I think that the best way to really understand how a neural network works is to implement one from scratch. That is exactly what I am going to do in this article. I will create a neural network class, and I want to design it to be flexible. I do not want to hardcode a specific activation function, loss function, or optimizer (such as SGD, Adam, or other gradient-based methods). Instead, I will design the class to receive these from outside, so that anyone can take the class's code and pass in whatever activation, loss, and optimizer they want.
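A minimal sketch of that design, assuming the class receives all three components as callables injected from outside (the class and argument names here are illustrative, not the article's; biases are omitted for brevity):

```python
import numpy as np

class NeuralNetwork:
    """A tiny dense network; activation, loss, and optimizer are injected."""

    def __init__(self, layer_sizes, activation, activation_grad, loss_grad, optimizer):
        self.weights = [np.random.randn(i, o) * 0.1
                        for i, o in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.activation = activation
        self.activation_grad = activation_grad
        self.loss_grad = loss_grad    # gradient of the loss wrt the network output
        self.optimizer = optimizer    # maps (parameter, gradient) -> updated parameter

    def forward(self, x):
        self.zs, self.acts = [], [x]
        for W in self.weights:
            z = self.acts[-1] @ W
            self.zs.append(z)
            self.acts.append(self.activation(z))
        return self.acts[-1]

    def backward(self, y):
        # Backpropagate layer by layer, then let the injected optimizer update weights.
        delta = self.loss_grad(self.acts[-1], y) * self.activation_grad(self.zs[-1])
        for i in reversed(range(len(self.weights))):
            grad = self.acts[i].T @ delta
            if i > 0:
                delta = (delta @ self.weights[i].T) * self.activation_grad(self.zs[i - 1])
            self.weights[i] = self.optimizer(self.weights[i], grad)
```

One could then pass, say, `np.tanh` with its derivative `lambda z: 1 - np.tanh(z)**2`, a mean-squared-error gradient `lambda p, y: p - y` as the loss gradient, and plain SGD such as `lambda w, g: w - 0.01 * g` as the optimizer.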


Recurrent Neural Networks -- Part 1

#artificialintelligence

These are the lecture notes for FAU's YouTube lecture "Deep Learning". This is a full transcript of the lecture video with matching slides. We hope you enjoy it as much as the videos. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. If you spot mistakes, please let us know!


How do Neural Networks learn?

#artificialintelligence

Neural networks are, without a doubt, the most popular machine learning technique in use today, so I think it is worth understanding how they actually learn. If we represent the input and output values of each layer as vectors, the weights as matrices, and the biases as vectors, then we get the flattened view of a neural network shown above, which is just a sequence of vector-function applications: functions that take vectors as input, transform them, and output other vectors. In the image above, each line represents a function, which can be either a matrix multiplication plus a bias vector, or an activation function.
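As a concrete illustration of that "sequence of vector functions" view, here is a minimal sketch of a two-layer forward pass (the layer sizes and the ReLU activation are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three chained "functions": affine map, activation, affine map.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # layer 1 weights and bias
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)   # layer 2 weights and bias

def forward(x):
    h = x @ W1 + b1          # vector in, vector out: matrix multiply plus bias
    h = np.maximum(h, 0.0)   # vector in, vector out: activation function (ReLU)
    return h @ W2 + b2       # final affine map produces the output vector

print(forward(rng.standard_normal(4)))   # a 2-dimensional output vector
```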


Let's build a Full-Text Search engine - Artem Krylysov

#artificialintelligence

Full-Text Search is one of those tools people use every day without realizing it. If you ever googled "golang coverage report" or tried to find "indoor wireless camera" on an e-commerce website, you used some kind of full-text search. Full-Text Search (FTS) is a technique for searching text in a collection of documents. A document can refer to a web page, a newspaper article, an email message, or any structured text. Today we are going to build our own FTS engine.
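The original post implements its engine in Go; as a language-neutral illustration of the core data structure behind most FTS engines, an inverted index, here is a minimal Python sketch (the deliberately simplistic tokenizer is an assumption for the example):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Lowercase and split on non-alphanumerics; real engines also stem
    # tokens and drop stopwords.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """Inverted index: map each token to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for token in tokenize(text):
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND query: return documents containing every query token."""
    sets = [index.get(token, set()) for token in tokenize(query)]
    return set.intersection(*sets) if sets else set()

docs = ["indoor wireless camera", "golang coverage report", "wireless indoor thermostat"]
index = build_index(docs)
print(search(index, "indoor wireless"))   # {0, 2}
```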


Voice tech: The past, present, and future

#artificialintelligence

Executives from NVIDIA, Deepgram, and Sharpen gathered via Zoom on Wednesday to discuss the current state of the voice tech industry, as well as where it's going. Growth in artificial intelligence (AI) and machine learning has had a huge hand in lifting the market, but it's only the beginning. Voice tech has seen rapid growth in recent years and isn't predicted to stop: the market is estimated to be worth nearly $32 billion by 2025, a Grand View Research report found. With smart speakers and home assistants like Amazon Alexa, Apple's Siri, and Google Assistant making voice tech mainstream, most consumers are familiar with the concept. However, the technology is more complex than people may think, and it has come a long way.


Reinforcement Learning Starts to Deliver on Its Promise

#artificialintelligence

Summary: Advances in very low-cost compute and model-based reinforcement learning bring this modeling technique that much closer to adoption in the practical world. We keep asking if this is the year for reinforcement learning (RL) to finally make good on its many promises. As with flying cars and jet packs, the answer always seems to be at least a couple of years away. If your history with data science goes back to the late aughts, you may remember a time when there were only two basic types of models, supervised and unsupervised. Then, seemingly overnight, reinforcement learning was added as a third leg to this new stool.
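The article itself contains no code; as a rough sketch of what "model-based" means in this context (all names, the linear model, and the random-shooting planner below are illustrative simplifications), the agent learns a dynamics model from observed transitions and plans against it instead of acting purely by trial and error:

```python
import numpy as np

def model_based_step(transitions, candidate_actions, state, horizon=5):
    """Toy model-based RL: fit a dynamics model, then plan via simulated rollouts.

    transitions: list of (state, action, next_state, reward) tuples observed so far.
    """
    # 1. Fit a (here: linear least-squares) model: [state, action] -> next_state.
    X = np.array([np.append(s, a) for s, a, _, _ in transitions])
    Y = np.array([s2 for _, _, s2, _ in transitions])
    model, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # 2. Plan: simulate candidate action sequences, keep the best one.
    def rollout_return(actions):
        s, total = state.copy(), 0.0
        for a in actions:
            s = np.append(s, a) @ model     # predicted next state
            total += -np.sum(s ** 2)        # assumed reward: stay near the origin
        return total

    plans = [np.random.choice(candidate_actions, size=horizon) for _ in range(64)]
    return max(plans, key=rollout_return)[0]  # execute the best plan's first action
```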


Council Post: Want To Measure Your Enterprise AI Initiatives? Start With Model Debt (Part 2 Of 2)

#artificialintelligence

In part one of this discussion, I presented the basic concept of model debt as a way to measure the effectiveness of individual models and AI programs overall. In part two, I'll go through a short example to show how model debt can be computed in practice. The first input is the target production days (TPD), a count of the number of days that the model is intended to be in production over its full life cycle, starting from when the data science team releases it for production. The shorter the lock-to-load time, the faster the model can contribute to the business. The actual lock-to-load time, meaning how long it takes to move the released model into production, will depend on how effectively the ModelOps process moves the model through its life-cycle steps as defined by the enterprise AI architect, including technical checks (e.g., security scans, performance verification), governance requirements (e.g., regulatory compliance, explainability reports) and business considerations (e.g., agreement on KPIs, departmental sign-offs).
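The article's exact formula is not reproduced here; as a loose illustration of the two quantities it names (TPD and lock-to-load time), one could track how many target production days a delayed deployment consumes. The accounting below is an assumption for illustration, not the author's published definition:

```python
from datetime import date

def lost_production_days(release: date, deployed: date, retired: date, tpd: int) -> int:
    """Hypothetical model-debt input: target production days lost to delay.

    TPD counts from release, so every day between release ("lock") and
    deployment ("load") is a day the model cannot contribute. This is an
    illustrative assumption, not the article's formula.
    """
    lock_to_load = (deployed - release).days
    actual_days = max(0, min((retired - deployed).days, tpd - lock_to_load))
    return tpd - actual_days                 # shortfall against the target

# Example: 365 target days, a 30-day lock-to-load delay, retired after ~11 months.
print(lost_production_days(date(2020, 1, 1), date(2020, 1, 31), date(2020, 12, 1), 365))
```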