If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Vacuuming is one of the most loathed household chores. While it doesn't come with the ick factor of cleaning the toilet or the tedium of dusting, pushing and dragging a noisy, cumbersome vacuum is its own kind of torture. No wonder most of us only break it out the bare-minimum-recommended once a week.
Tidying guru Marie Kondo believes that if it doesn't spark joy, you should get rid of it. In our AI business, we get little joy out of unnecessary expenses, so we make it a habit to declutter them on a regular basis. Any artificial intelligence (AI) or machine learning (ML) business owner will tell you that AI can be an expensive habit. Left unchecked, the costs of those pet projects, support services and data-storage needs can take over your profit & loss (P&L) statement with all the exuberance of a pile of unread New Yorkers. This is especially true as your business starts to rapidly scale, and the result can leave you sitting less than pretty.
Computers perform better when they receive regular maintenance. Most of us, though, never seem to get around to it in a timely manner. Thankfully, there's CleanMyPC, a handy app that removes the clutter that can slow down your computer and keeps it running at peak performance. Single licenses are on sale for only $19.99, nearly 50% off the usual price. CleanMyPC handles all your computer maintenance dirty work for you.
Contrast-enhanced ultrasound is a radiation-free imaging modality that uses encapsulated gas microbubbles for improved visualization of the vascular bed deep within the tissue. It has recently been used to enable imaging with unprecedented subwavelength spatial resolution by relying on super-resolution techniques. A typical preprocessing step in super-resolution ultrasound is to separate the microbubble signal from the cluttering tissue signal. This step has a crucial impact on the final image quality. Here, we propose a new approach to clutter removal based on robust principal component analysis (PCA) and deep learning. We begin by modeling the acquired contrast-enhanced ultrasound signal as a combination of low-rank and sparse components. This model underlies robust PCA and was previously suggested in the context of ultrasound Doppler processing and dynamic magnetic resonance imaging. We then illustrate that an iterative algorithm based on this model achieves improved separation of the microbubble signal from the tissue signal over commonly practiced methods. Next, we apply the concept of deep unfolding to design a deep network architecture tailored to our clutter-filtering problem, which exhibits improved convergence speed and accuracy with respect to its iterative counterpart. We compare the performance of the suggested deep network, on both simulations and in-vivo rat brain scans, with a commonly practiced deep-network architecture and the fast iterative shrinkage algorithm, and show that our architecture yields better image quality and contrast.
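The low-rank-plus-sparse model behind robust PCA can be sketched with the standard principal component pursuit iteration (inexact augmented Lagrangian with singular-value and soft thresholding). This is a generic sketch of that classical baseline, not the abstract's unfolded network; the parameter defaults below are common textbook choices, not values from the paper.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: shrink singular values by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft thresholding.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, mu=None, rho=1.5, n_iter=100, tol=1e-7):
    """Split D into low-rank L (tissue) + sparse S (microbubbles)
    via principal component pursuit (inexact ALM)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    if mu is None:
        mu = 1.25 / np.linalg.norm(D, 2)      # common initial penalty
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                      # Lagrange multiplier
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)     # low-rank update
        S = soft(D - L + Y / mu, lam / mu)    # sparse update
        R = D - L - S                         # constraint residual
        Y = Y + mu * R
        mu = min(mu * rho, 1e7)               # increase penalty
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return L, S
```

On well-separated synthetic data (low rank, few large sparse outliers), this iteration typically recovers both components; the abstract's point is that an unfolded network can reach comparable separations faster.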
This paper addresses the mapping problem. Using a conjugate prior form, we derive the exact theoretical batch multi-object posterior density of the map given a set of measurements. The landmarks in the map are modeled as extended objects, and the measurements are described as a Poisson process, conditioned on the map. We use a Poisson process prior on the map and prove that the posterior distribution is a hybrid Poisson multi-Bernoulli mixture distribution. We devise a Gibbs sampling algorithm to sample from the batch multi-object posterior. The proposed method can handle uncertainties in the data associations and the cardinality of the set of landmarks, and is parallelizable, making it suitable for large-scale problems. The performance of the proposed method is evaluated on synthetic data and is shown to outperform a state-of-the-art method.
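To make the Gibbs idea concrete, here is a deliberately simplified 1-D toy: two point landmarks with unknown means, Gaussian measurement noise, and a uniform clutter component. The sampler alternates between sampling each measurement's association (landmark or clutter) and sampling each landmark mean from its conjugate Gaussian conditional. All numbers are invented for illustration; this is not the paper's hybrid Poisson multi-Bernoulli mixture posterior or extended-object model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting (values are illustrative only):
z = np.array([-2.1, -1.9, 1.4, 1.6, 4.0])   # measurements
K = 2                                        # number of landmarks
sigma = 0.3                                  # measurement noise std
tau = 3.0                                    # prior std on landmark means (prior mean 0)
clutter = 0.01                               # uniform clutter density

means = np.array([z[0], z[2]])               # pragmatic initialization from data
assoc = np.zeros(len(z), dtype=int)          # association of each measurement
trace = []

for sweep in range(150):
    # 1) Sample each association given the current landmark means.
    for i, zi in enumerate(z):
        p = np.empty(K + 1)
        p[:K] = np.exp(-0.5 * ((zi - means) / sigma) ** 2) \
                / (sigma * np.sqrt(2 * np.pi))
        p[K] = clutter                        # index K means "clutter"
        assoc[i] = rng.choice(K + 1, p=p / p.sum())
    # 2) Sample each landmark mean given its associated measurements
    #    (conjugate Gaussian update; prior mean is 0).
    for k in range(K):
        zk = z[assoc == k]
        var = 1.0 / (1.0 / tau**2 + len(zk) / sigma**2)
        means[k] = rng.normal(var * zk.sum() / sigma**2, np.sqrt(var))
    if sweep >= 100:                          # keep post-burn-in samples
        trace.append(np.sort(means.copy()))

posterior_means = np.mean(trace, axis=0)
```

In this toy run the two clusters of measurements attach to the two landmarks, the outlier at 4.0 is explained as clutter, and the posterior means land near the cluster centers, illustrating how Gibbs sweeps handle association uncertainty.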
It's hard to predict what will happen once technology is loosed upon the hoi polloi. They might love it, or trash it. They might break it and need someone to fix it. They might pooh-pooh it and ignore it, even though it's great. This week, we explored the unintended consequences of a bunch of big companies' and engineers' decisions, and their efforts to patch them, or embrace them.
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot. We explore the idea in the context of a 'category level' manipulation task, in which a learned control policy enables a robot to perform a mating task involving novel objects. We explore the case where depth images are used as the main form of sensor input. Our experimental results demonstrate that the proposed method consistently outperforms baseline methods that train only in simulation or that combine real and simulated data in a naive way.
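The abstract does not spell out the loss, but a pairwise term of this kind is commonly a penalty on the distance between feature embeddings of corresponding simulated and real observations, added to the task objective. The function names, shapes, and weighting below are our own assumptions, sketched for illustration only.

```python
import numpy as np

def pairwise_domain_loss(f_sim, f_real):
    """Mean squared distance between paired embeddings.
    f_sim, f_real: (batch, dim) features from corresponding
    simulated and real depth images of the same scene."""
    return np.mean(np.sum((f_sim - f_real) ** 2, axis=1))

def total_loss(task_loss, f_sim, f_real, lam=0.1):
    # Task objective plus the pairwise alignment term, so the
    # network is pushed toward features that look the same in
    # both domains while still solving the control task.
    return task_loss + lam * pairwise_domain_loss(f_sim, f_real)
```

Driving this term to zero makes paired sim and real inputs indistinguishable to the policy head, which is one plausible mechanism for the sim-to-real transfer the paper reports.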
Everyone, from musicians to school administrators, must get clever at connecting with distracted audiences. And with U.S. adults now consuming 11 hours of media daily, the challenge gets harder as technology and social norms morph. Americans aren't in the moment anymore, and it's a struggle for brands to reach us. On top of being distracted, we're exposed to 4,000 to 10,000 ads daily, creating layers upon layers of clutter for brands to break through. That brings us to the question: Have we become so over-connected that we've become disconnected?
Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors amid challenging cases of clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after only a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu
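The action selection over the two pixel-wise Q maps can be sketched as a greedy argmax across both networks' outputs. The array shapes and function name here are our own assumptions for illustration; the actual system's details are in the linked code.

```python
import numpy as np

def select_action(q_push, q_grasp):
    """Greedy action over two pixel-wise Q maps.
    q_push, q_grasp: (n_rot, H, W) arrays holding one Q value per
    end-effector rotation and pixel location, for pushing and
    grasping respectively. Returns the primitive with the highest
    predicted Q value along with its rotation and pixel indices."""
    best_push = np.unravel_index(np.argmax(q_push), q_push.shape)
    best_grasp = np.unravel_index(np.argmax(q_grasp), q_grasp.shape)
    if q_push[best_push] >= q_grasp[best_grasp]:
        return ('push',) + best_push
    return ('grasp',) + best_grasp
```

Because both maps live in the same Q-value scale and are trained jointly, comparing their maxima lets the policy decide per step whether a push (to declutter) or a grasp (to collect reward) is currently more valuable.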