You don't need to live in a smart home to benefit from a Wi-Fi-connected smart speaker. Alexa, Google Assistant, Siri, Cortana, and other digital assistants can help you in dozens of ways, and you don't have to lift a finger to summon them--just speak their names. If you already know you want a smart speaker, scroll down for our top recommendations.
The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.
Science fiction didn't do a great job in preparing us for our first real encounters with AI. Most people probably still envision AI in the form of a sentient robot that can talk, move around, and experience feelings – something like WALL-E or C-3PO from the movies. Although that still may be the dream, it turns out that the current iteration of AI is actually quite different. With modern AI, all the "thinking" gets done in the cloud, and the algorithms aren't tied to the identity of a physical machine like we would have expected from the big screen. The modern iteration of AI works silently in the background without a face, and it's starting to impact everything it touches.
As Notre Dame Cathedral's majestic spire tumbled into the inferno on Monday night, live newsreaders around the world decried the tragic loss of this 12th-century marvel. The great timber roof – nicknamed "the forest" for the thousands of trees used in its beams – was gone, the rose windows feared melted, the heart of Paris destroyed forever. What few realised in the heat of the shocking footage was that much of what was ablaze was a 19th-century fantasy. Like most buildings of this age, Notre Dame is the sum of centuries of restorations and reinventions, a muddled patchwork of myth and speculation. Standing as a sturdy hulk on the banks of the Seine, the great stone pile has never been the most elegant or commanding of the ancient cathedrals, but it became the most famous. Begun in 1163, it was larger than any gothic church before it, employing some of the first flying buttresses to allow taller, thinner walls and larger expanses of glazing – including the spectacular rose windows that projected great cosmic wheels of colour into the luminous interior. "Where would [one] find … such magnificence and perfection, so high, so large, so strong, clothed round about with such a multiple variety of ornaments?"
The long-delayed live-action Minecraft movie has a new release date, so fans might want to make a note in their calendars for March 4th, 2022. After so many delays, it was clear it would still be a while before the film hit theatres, but the 2022 news might come as a disappointment to those who were at one point expecting to see Minecraft next month. Warner Bros. and Microsoft have also revealed some story details. The movie will focus on "a teenage girl and her unlikely group of adventurers. After the malevolent Ender Dragon sets out on a path of destruction, they must save their beautiful, blocky Overworld."
When death finally comes for us, will it announce its presence with a roar? Or, perhaps, with nothing at all -- letting the permanent silence that follows our eventual destruction speak for itself? Boston Dynamics, a company whose main export appears to be unsettling videos of its robotic creations, has offered up one possible answer. Death sounds like 40 robot-dog legs, marching together in unison across a lifeless blacktop parking lot. An April 16 video shows 10 of the company's Spot robots pulling a large truck.
In my latest weekend project, I have been using a Variational Autoencoder (VAE) to build a feature-based face editor. The model is explained in my YouTube video. The feature editing is based on modifying the latent distribution of the VAE. After the VAE has been trained, the latent space is mapped by encoding the training data once more. A latent-space vector for each feature is then determined from the labels of the training data.
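The feature-editing step described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual code: it assumes the training images have already been encoded into an array of latent vectors and that binary feature labels (e.g. "smiling" or not) are available. The function names and the mean-difference approach to deriving feature directions are my own assumptions about how such an editor might work.

```python
import numpy as np

def feature_directions(latents, labels):
    """Derive one latent-space direction per binary feature.

    latents: (N, D) array of latent vectors from encoding the training data
    labels:  (N, F) binary array; labels[i, f] == 1 if image i has feature f

    For each feature, the direction is the difference between the mean
    latent vector of images that have the feature and of those that don't.
    """
    directions = {}
    for f in range(labels.shape[1]):
        has_feature = latents[labels[:, f] == 1].mean(axis=0)
        lacks_feature = latents[labels[:, f] == 0].mean(axis=0)
        directions[f] = has_feature - lacks_feature
    return directions

def edit(z, direction, strength=1.0):
    """Shift a latent vector along a feature direction before decoding."""
    return z + strength * direction
```

Decoding the shifted vector with the trained VAE decoder would then produce the edited face; `strength` controls how pronounced the feature becomes.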
Abstract: Recent advancements in machine learning research, i.e., deep learning, introduced methods that surpass conventional algorithms as well as humans in several complex tasks, ranging from detection of objects in images and speech recognition to playing difficult strategic games. However, the current methodology of machine learning research, and consequently the implementations of such algorithms in real-world applications, seems to have a recurring HARKing (Hypothesizing After the Results are Known) issue. In this work, we elaborate on the algorithmic, economic, and social reasons for and consequences of this phenomenon. Furthermore, a potential future trajectory of machine learning research and development from the perspective of accountable, unbiased, ethical, and privacy-aware algorithmic decision making is discussed. We would like to emphasize that with this discussion we neither claim to provide exhaustive argumentation nor blame any specific institution or individual for the issues raised.
Abstract: We show how to teach machines to paint like human painters, who can use a few strokes to create fantastic paintings. By combining a neural renderer and model-based Deep Reinforcement Learning (DRL), our agent can decompose texture-rich images into strokes and make long-term plans. For each stroke, the agent directly determines its position and color. Excellent visual effects can be achieved using hundreds of strokes. The training process requires neither experience of human painting nor stroke-tracking data.
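The stroke-by-stroke process the abstract describes can be outlined as a simple loop: at each step the agent looks at the current canvas and the target image, proposes stroke parameters, and the renderer applies the stroke. The sketch below is a hypothetical outline only; the `agent` and `renderer` callables stand in for the paper's trained DRL policy and neural renderer, whose real interfaces are not given here.

```python
import numpy as np

def paint(target, agent, renderer, n_strokes=200):
    """Iteratively paint toward a target image, one stroke at a time.

    agent(canvas, target) -> stroke parameters (position, color, etc.)
    renderer(canvas, stroke) -> new canvas with the stroke drawn
    """
    canvas = np.zeros_like(target)          # start from a blank canvas
    for _ in range(n_strokes):
        stroke = agent(canvas, target)      # policy picks the next stroke
        canvas = renderer(canvas, stroke)   # renderer applies it
    return canvas
```

The key design point from the abstract is that planning is long-term: the agent is rewarded for the final resemblance to the target over hundreds of strokes, not for any single stroke in isolation.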