10 New Things I Learnt from fast.ai Course V3

#artificialintelligence

Everyone's talking about the fast.ai Massive Open Online Course (MOOC), so I decided to have a go at their 2019 deep learning course, Practical Deep Learning for Coders, v3. I've known some deep learning concepts and ideas for a while (I've been in this field for about a year now, dealing mostly with computer vision), but never really understood some of the intuitions or explanations behind them. I also know that Jeremy Howard, Rachel Thomas and Sylvain Gugger (follow them on Twitter!) are influential people in the deep learning sphere (Jeremy has a lot of experience with Kaggle competitions), so I hope to gain new insights and intuitions, and some tips and tricks for model training, from them. I have so much to learn from these folks.


I think, therefore I code

#artificialintelligence

To most of us, a 3-D-printed turtle just looks like a turtle: four legs, patterned skin, and a shell. But if you show it to a particular computer in a certain way, that object's not a turtle -- it's a gun. Objects or images that can fool artificial intelligence like this are called adversarial examples. Jessy Lin, a senior double-majoring in computer science and electrical engineering and in philosophy, believes that they're a serious problem, with the potential to trip up AI systems involved in driverless cars, facial recognition, or other applications. She and several other MIT students have formed a research group called LabSix, which creates examples of these AI adversaries in real-world settings -- such as the turtle identified as a rifle -- to show that they are legitimate concerns.
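For readers curious how such examples are typically crafted, here is a minimal sketch of the classic fast gradient sign method in PyTorch. This is a generic illustration, not LabSix's actual technique (their work targets perturbations that survive real-world transformations), and `model`, `image` and `label` are hypothetical placeholders for a trained classifier and an input batch.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Nudge `image` in the direction that most increases the model's loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A small step along the sign of the input gradient is often enough
    # to change the model's prediction while the image still looks the same to us.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```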


The five technical challenges Cerebras overcame in building the first trillion-transistor chip – TechCrunch

#artificialintelligence

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today -- and it is a doozy. The "Wafer Scale Engine" packs 1.2 trillion transistors (the most ever), measures 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative). Cerebras' Wafer Scale Engine is larger than a typical Mac keyboard. It's made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees.


This Huge Computer Chip Could Lead to Big A.I. Advances

#artificialintelligence

Tucked in the Los Altos hills near the Stanford University campus, in a low-slung bunker of offices across from a coffee shop, is a lab overflowing with blinking machines putting circuits through their paces to test for speed, the silicon equivalent of a tool and die shop. Most chips you can balance on the tip of your finger, measuring just a centimeter on a side. Something very different is emerging here. Andrew Feldman, 50, chief executive of startup Cerebras Systems, holds up both hands, bracing between them a shining slab the size of a large mouse pad, an exquisite array of interconnecting lines etched in silicon that shines a deep amber under the dull fluorescent lights. At eight and a half inches on each side, it is the biggest computer chip the world has ever seen.


A brief history of AI and machine learning

#artificialintelligence

AI is transforming the way we live and interact at an accelerating rate. Robotic companions help us as we age, the cities we live in are becoming smarter, and machines are enabling us to better manage chronic diseases. Our four-part video series "The End of the Beginning" highlights some of the technology that is becoming part of our everyday lives. Part one, What's Next for AI, looks at the evolution of the science behind artificial intelligence that builds on a history of slow advances that began in the middle of the 20th century. The origins of artificial intelligence date to the period following the development of machine-based computation during World War II, when some early computer scientists decided to take on a challenge much greater than calculating ballistics tables or tabulating census results.


Importance of Loss Function in Machine Learning

#artificialintelligence

Assume you are given the task of filling a bag with 10 kg of sand. You fill it up until the weighing machine gives you a perfect reading of 10 kg, or you take sand out if the reading exceeds 10 kg. Just like that weighing machine, if your predictions are off, your loss function will output a higher number. As you experiment with your algorithm to try and improve your model, your loss function will tell you whether you're getting anywhere. "The function we want to minimize or maximize is called the objective function or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function" - Source. At its core, a loss function is a measure of how good your prediction model is at predicting the expected outcome (or value).
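To make that concrete, here is a minimal sketch (my own illustration, not from the article) of a mean squared error loss in Python/NumPy: the further the predicted weights drift from the 10 kg target, the larger the number the loss function returns.

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean squared error: the average squared gap between predictions and targets."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Target: every bag should weigh 10 kg.
targets = np.full(3, 10.0)

print(mse_loss([10.1, 9.9, 10.0], targets))  # small loss: readings are close to 10 kg
print(mse_loss([12.0, 7.5, 11.0], targets))  # larger loss: readings are far off
```

Training then amounts to adjusting the model so that this number keeps shrinking, just as you keep adding or removing sand until the scale reads 10 kg.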


What Is Conversational AI? NVIDIA Blog

#artificialintelligence

For a quality conversation between a human and a machine, responses have to be quick, intelligent and natural-sounding. But up to now, developers of language-processing neural networks that power real-time speech applications have faced an unfortunate trade-off: Be quick and you sacrifice the quality of the response; craft an intelligent response and you're too slow. That's because human conversation is incredibly complex. Every statement builds on shared context and previous interactions. From inside jokes to cultural references and wordplay, humans speak in highly nuanced ways without skipping a beat.


Why Trusting AI Means Trusting People

#artificialintelligence

Artificial general intelligence, colloquially known as superintelligence, can be defined as AI improving upon itself through the process of iterative learning, reaching a point of singularity and quickly surpassing the limits of human intelligence. By no means will this assuredly happen, but the likes of Sam Altman and Elon Musk believe it will and are striving for regulations to be set before it does. Regardless of where you stand, AI is a dangerous technology -- and some might go so far as to call it a weapon. To give just a few examples, deepfakes use AI to perform highly realistic face swaps that create the illusion that someone said or did something they never did. Deepfakes can also be used to create fake audio in order to impersonate others, the potential dangers of which, when combined with fake videos, become enormous.