Computer systems predict objects' responses to physical forces

MIT News

Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.

Presenting their work at this year's Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, they might also generate some insights useful for robotic vision systems. "The common theme here is really learning to perceive physics," Tenenbaum says.

NIPS 2017 -- Day 2 Highlights – Insight Data


We are back with some highlights from the second day of NIPS. A lot of fascinating research was showcased today, and we are excited to share some of our favorites with you. If you missed them, feel free to check out our Day 1 and Day 3 highlights! One of the most memorable sessions of the first two days was today's invited talk by Kate Crawford on bias in Machine Learning. We recommend taking a look at the feature image of this post, which represents modern Machine Learning datasets as an attempt at creating a taxonomy of the world.

NIPS 2017 -- Day 3 Highlights – Insight Data


Pieter Abbeel started his invited talk by summarizing some of the key differences between supervised learning and Reinforcement Learning (RL). In essence, RL is concerned with learning an effective policy that lets an agent interact with the world in a way that best achieves a goal. For example, learning a policy for how to walk. Recently, RL has seen many success stories, such as learning to play Atari games from raw pixel inputs, mastering the game of Go at a superhuman level, and teaching simulated characters to walk from scratch. However, one big gap between RL algorithms and humans remains: the time it takes to acquire new and effective policies.
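None of the talk's examples are reproduced here, but the core idea, learning a policy by trial and error rather than from labeled examples, can be sketched with a toy two-armed bandit. Everything in this snippet (the arm payoffs, the epsilon value) is made up for illustration:

```python
import random

def run_bandit(steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy action-value learning on a two-armed bandit.

    The 'policy' here is: pick the arm with the highest estimated
    value, but explore a random arm with probability eps.
    """
    rng = random.Random(seed)
    true_means = [0.3, 0.7]               # arm 1 pays off more on average
    q = [0.0, 0.0]                        # estimated value per arm
    n = [0, 0]                            # pull counts per arm
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(2)          # explore
        else:
            a = 0 if q[0] >= q[1] else 1  # exploit current estimate
        reward = 1.0 if rng.random() < true_means[a] else 0.0
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]    # incremental mean update
    return q

q = run_bandit()
print(q)  # the estimate for arm 1 should end up above arm 0
```

No labels are ever provided; the agent discovers the better arm purely from the rewards its own actions produce, which is the supervised-vs-RL distinction the talk opened with.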

Deploying a machine learning model as an API with Datmo, Falcon, Gunicorn, and Python


First we'll need to write a function that can take an unclassified entry and perform a prediction on it. To do this, the script will need to rebuild the model in memory from the pickle file (model.dat, in this case) and feed it a new entry so it can make a prediction. While it's possible to retrain a model from scratch each time we want to make a prediction, doing so is incredibly resource-intensive (especially in larger examples), is fundamentally different from making a standalone inference, and as such is very bad practice in machine learning. I've written a predict function within a new file. For this prediction, the model requires 4 numerical inputs (sepal_length, sepal_width, petal_length, petal_width, in this order) and returns a class prediction containing one of three species (Iris-setosa, Iris-versicolor, Iris-virginica).
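As a rough sketch of what such a predict function can look like, here is a minimal version that rebuilds a model from model.dat once and reuses it for subsequent calls. The ThresholdModel class is an illustrative stand-in for the article's actual trained classifier (which would be a real estimator pickled during training); its petal-based rules exist only to make the example self-contained:

```python
import pickle

class ThresholdModel:
    """Stand-in for the trained classifier stored in model.dat.
    A real deployment would unpickle a fitted estimator instead."""
    def predict(self, rows):
        out = []
        for sepal_length, sepal_width, petal_length, petal_width in rows:
            # crude petal-based rules, roughly matching the iris species split
            if petal_length < 2.5:
                out.append("Iris-setosa")
            elif petal_width < 1.7:
                out.append("Iris-versicolor")
            else:
                out.append("Iris-virginica")
        return out

_model = None  # cached so the pickle is only loaded once

def predict(sepal_length, sepal_width, petal_length, petal_width,
            path="model.dat"):
    """Rebuild the model from its pickle file once, then reuse it."""
    global _model
    if _model is None:
        with open(path, "rb") as f:
            _model = pickle.load(f)
    return _model.predict([(sepal_length, sepal_width,
                            petal_length, petal_width)])[0]

# write a stand-in pickle so the example runs end to end
with open("model.dat", "wb") as f:
    pickle.dump(ThresholdModel(), f)

print(predict(5.1, 3.5, 1.4, 0.2))  # -> Iris-setosa
```

The caching matters for the API use case: Gunicorn workers stay alive between requests, so loading the pickle once per process rather than once per call avoids exactly the repeated-rebuild cost the paragraph warns about.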

Create Twitter WordCloud with just 40 lines of R Code


Guess whose Twitter handle gives this word cloud? You are right, that is Andrew Ng tweeting about his new Deep Learning course on Coursera! It's always fun to see data in action, isn't it? Let's try to create a similar word cloud for three world leaders. A word cloud is a data visualisation technique in which the size of each word indicates its frequency or importance in the associated text (i.e., the more times a word appears in the corpus, the bigger the word). Since you are interested in creating a word cloud from Twitter handles using R, I will safely assume you have both a Twitter account to your name and RStudio installed on your machine.
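The article's R code isn't reproduced in this excerpt, but the frequency counting at the heart of any word cloud is easy to sketch. Here is a Python approximation (not the article's R code): the returned counts are what a plotting layer would map to font sizes. The sample tweets and the tiny stop-word list are made up for illustration; real pipelines use much fuller stop-word lists:

```python
import re
from collections import Counter

def word_weights(tweets, top_n=5):
    """Count word frequencies across tweets; a word cloud maps
    these counts to font sizes (bigger count, bigger word)."""
    stop = {"the", "a", "an", "of", "to", "and", "on", "is", "my"}
    words = re.findall(r"[a-z']+", " ".join(tweets).lower())
    return Counter(w for w in words if w not in stop).most_common(top_n)

tweets = [
    "Deep Learning course on Coursera",
    "Excited about deep learning",
    "New deep learning specialization",
]
print(word_weights(tweets, 3))  # 'deep' and 'learning' dominate
```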

Making R Code Faster : A Case Study


I had a working, short script that took 3 1/2 minutes to run. While this may be fine if you only need to run it once, I needed to run it hundreds of times for simulations. My first attempt to do so ended about four hours after I started the code, with 400 simulations left to go, and I knew I needed to get some help. This post documents the iterative process of improving the performance of the function, culminating in a runtime of 0.64 seconds for 10,000 iterations, a speed-up of more than 100,000x. At Etsy I work a lot on our A/B Testing system.
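The post's actual function isn't shown in this excerpt, but the flavor of a speed-up like that is usually algorithmic: stop recomputing work inside the loop. A hypothetical before/after (in Python here, though the post is about R) that produces identical output:

```python
def running_means_slow(xs):
    """O(n^2): recompute the sum of the prefix at every step."""
    return [sum(xs[: i + 1]) / (i + 1) for i in range(len(xs))]

def running_means_fast(xs):
    """O(n): carry the prefix sum forward instead of recomputing it."""
    out, total = [], 0.0
    for i, x in enumerate(xs):
        total += x
        out.append(total / (i + 1))
    return out

print(running_means_fast([1, 2, 3, 4]))  # -> [1.0, 1.5, 2.0, 2.5]
```

The key property of any such refactor, and the one worth testing at every iteration, is that the fast version returns exactly what the slow one did.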

How Financial Analysts Can Leverage Web Data Extraction


Financial analysts have long relied on traditional sources of data, such as company filings, to track trends in the stock market and investments. But leveraging web data extraction of non-traditional sources can put you ahead of the trends in the sectors you manage. While anyone can leverage readily available information on the web, it's tedious to research and sift through the information to turn the data into something usable. Data extraction of non-traditional web sources can help you quickly gather large amounts of data to refine equity and finance research and recommendations. If you're looking for new ways to improve the efficiency of your role as a financial analyst, here's how to leverage web data extraction for success.
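As a minimal sketch of the extraction step itself (assuming the page has already been fetched), here is a parser built on Python's standard-library HTMLParser that pulls rows out of an HTML table. The tickers and prices are invented for illustration:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the cell text of an HTML table, row by row; this is
    the core of most web-data-extraction pipelines once a page
    has been downloaded."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.rows, self.current = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
        elif tag == "tr":
            self.current = []

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.current:
            self.rows.append(self.current)

    def handle_data(self, data):
        if self.in_cell:
            self.current.append(data.strip())

page = """<table>
<tr><td>ACME</td><td>102.5</td></tr>
<tr><td>Globex</td><td>98.1</td></tr>
</table>"""

parser = TableParser()
parser.feed(page)
print(parser.rows)  # -> [['ACME', '102.5'], ['Globex', '98.1']]
```

Once the cells are structured rows rather than raw markup, the tedious sifting the paragraph describes becomes ordinary data analysis.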

NIPS 2017 -- Day 1 Highlights – Insight Data


This talk gave a solid overview of the current state and recent advances in Deep Learning. Convolutional Neural Networks (CNN) and autoregressive models are starting to see ubiquitous use in production, showing a fast transition from research to industry. These models have taught us that introducing inductive biases such as translation invariance (CNN) or time recurrence (Recurrent Neural Networks) can be extremely useful. We've also found out that simple "tricks" such as Residual Networks or Attention can lead to tremendous leaps in performance. There are good reasons to believe we will find more such "tricks".
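The Residual Network "trick" mentioned above can be illustrated with a deliberately tiny scalar sketch (not real network code): the block computes a correction f(x) and adds the input back, so a block with near-zero weights behaves like the identity. All the numbers below are made up for illustration:

```python
def layer(x, w, b):
    """A toy one-dimensional 'layer': scale, shift, then ReLU."""
    return max(0.0, w * x + b)

def residual_block(x, w, b):
    """Residual connection: the layer learns a correction f(x),
    and the input x is added back unchanged."""
    return x + layer(x, w, b)

# With zero weights the block is exactly the identity; this is why
# stacking many residual blocks does not degrade the signal, which
# is what makes very deep residual networks trainable.
print(residual_block(2.0, 0.0, 0.0))  # -> 2.0
```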

Wearable Tech Trends for 2017 - Amyx Internet of Things (IoT)


In the next few years, expect smart clothing and accessories to become more fashionable and integrate more seamlessly into our daily lives. The wearable tech market is still relatively young and in flux. Fitbit, the company that arguably led the first wave of interest in wearables, didn't start making a wrist-based fitness tracker until 2013. Now, just about every major tech firm – and a slew of scrappy startups – has its own "smart" garment or accessory to peddle, whether in the form of a watch, ring, pendant, sports bra, shoe or something else. By 2020, the global appetite for wearable devices is expected to grow to around $34 billion, with roughly 411 million of the smart devices sold, according to industry analyst firm CCS Insight.


The Japan Times

SYDNEY – High-tech shark-spotting drones are patrolling dozens of Australian beaches this summer to quickly identify underwater predators and deliver safety devices to swimmers and surfers faster than traditional lifesavers. As hundreds of people lined up in the early morning sun to take part in a recent ocean swimming race at Bilgola Beach north of Sydney, they did so in the knowledge the ocean had been scanned to keep them safe. "I think it is really awesome," 20-year-old competitor Ali Smith said. "It is cool to see technology and ocean swimming getting together, and hopefully more people will feel safer and get involved." The drones being used are top notch.