How long will it be until artificial intelligence surpasses our own? UQ graduate Matthew Dahlitz explores the question in his new documentary, featuring scientists from the Queensland Brain Institute. While we may feel bombarded by doomsday predictions about creating robots with human-like intelligence, UQ graduate, neuropsychotherapist and filmmaker Matthew Dahlitz believes there's no need to panic. Dahlitz (Bachelor of Arts (Psychological Science) '94, Master of Counselling '14) has combined his knowledge of the human mind with his passion for the arts to release his first feature-length documentary with his son Jachin, through their independent film production and media house, Perfekt Studios. Titled Toward Singularity, the documentary explores how brain science is informing the development of superintelligent computers, and features interviews with a number of scientists from UQ's Queensland Brain Institute (QBI).
Speedcubing is the sport of solving a classic Rubik's Cube -- or a related combination puzzle -- in the shortest amount of time possible. And, no, it is not for the faint of heart. The new Netflix documentary on this subject, The Speed Cubers, dives headfirst into the friendly but competitive speedcubing culture. The 40-minute film is one of three new documentary shorts debuting on Netflix this summer. The Speed Cubers centers on a couple of professional competitors who go head-to-head at the World Cube Association World Championship in Melbourne, Australia, in 2019.
Christmas is just around the corner, which means it's time to start planning your presents. Finding the perfect gift for your loved ones can be tricky, but don't worry, TechRadar is here to help you plan ahead. There's nothing like watching the people you care about erupt into smiles as they tear off your wrapping and are greeted with a gift they actually love. So if you want to leave a lasting impression, the latest tech gadget can do just that. Technology evolves so quickly that even if you settled on a gizmo last year, there's always something new to choose from this year.
With the advent of deep neural networks, some research has turned toward understanding their black-box behavior. In this paper, we propose a new type of self-interpretable model, that is, an architecture designed to provide explanations along with its predictions. Our method proceeds in two stages and is trained end-to-end: first, our model builds a low-dimensional binary representation of any input, where each feature denotes the presence or absence of a concept. Then, it computes a prediction based only on this binary representation, through a simple linear model. This allows an easy interpretation of the model's output in terms of the presence of particular concepts in the input. The originality of our approach lies in the fact that concepts are automatically discovered at training time, without the need for additional supervision. Concepts correspond to sets of patterns, built on local low-level features (e.g., a part of an image, a word in a sentence), that are easily distinguishable from the other concepts. We experimentally demonstrate the relevance of our approach using classification tasks on two types of data, text and image, by showing its predictive performance and interpretability.
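The two-stage pipeline the abstract describes can be sketched in a few lines. This is only an illustrative mock-up, not the authors' implementation: the concept extractor here is a fixed random projection with a hard threshold, whereas in the paper both stages are neural networks trained end-to-end, and all dimensions and names below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: raw input features, discovered concepts, output classes.
D_IN, N_CONCEPTS, N_CLASSES = 16, 5, 3

# Stage 1 (sketch): score each concept and threshold to a binary
# presence/absence vector. A random projection stands in for the
# learned concept extractor.
W_concept = rng.normal(size=(D_IN, N_CONCEPTS))

def concept_vector(x):
    """Map an input to a binary vector: 1 = concept present, 0 = absent."""
    return (x @ W_concept > 0).astype(float)

# Stage 2: the prediction uses ONLY the binary vector, via a linear model,
# so each class score decomposes into per-concept contributions.
W_linear = rng.normal(size=(N_CONCEPTS, N_CLASSES))

def predict(x):
    z = concept_vector(x)
    scores = z @ W_linear
    return z, int(np.argmax(scores))

x = rng.normal(size=D_IN)
z, label = predict(x)
# The explanation is simply the set of active concepts and their
# linear weights toward the predicted class.
active = np.flatnonzero(z)
print("active concepts:", active.tolist(), "predicted class:", label)
```

Because the classifier is linear over binary features, the contribution of concept `k` to the predicted class is just `W_linear[k, label]`, which is what makes the prediction directly interpretable.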
End-to-end neural models have made significant progress in question answering; however, recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query and then finds a relevant answer, and a fine-grain module that scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.
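Coattention, the building block named in the abstract, lets the document and the query each attend over the other through a shared affinity matrix. The sketch below shows one such layer in plain numpy; it is a simplified stand-in, not the CFC itself (the paper stacks these into hierarchies with self-attention and learned encoders), and the tensor shapes are arbitrary example values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(doc, query):
    """One coattention layer (sketch): each sequence attends over the other.

    doc:   (n_d, h) document token encodings
    query: (n_q, h) query token encodings
    Returns query-aware document representations, shape (n_d, 2h).
    """
    affinity = doc @ query.T                       # (n_d, n_q) token-pair scores
    # Document-to-query attention: summarize the query for each doc token.
    q_summary = softmax(affinity, axis=1) @ query  # (n_d, h)
    # Query-to-document attention: summarize the doc for each query token,
    # then carry those summaries back to the document side.
    d_summary = softmax(affinity.T, axis=1) @ doc  # (n_q, h)
    back = softmax(affinity, axis=1) @ d_summary   # (n_d, h)
    return np.concatenate([q_summary, back], axis=1)

rng = np.random.default_rng(1)
doc, query = rng.normal(size=(7, 4)), rng.normal(size=(3, 4))
out = coattention(doc, query)
print(out.shape)  # (7, 8)
```

Each document token thus carries two summaries: what the query says about it, and what the query-conditioned document says about it, which is the signal the coarse- and fine-grain modules build on.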
Logan Marshall-Green stars in the upcoming ultra-violent sci-fi/horror/action film Upgrade, directed by Australia's own Leigh Whannell. The film follows Marshall-Green's quadriplegic character, Grey, as he's upgraded with a new kind of artificial intelligence chip that restores all of his former functions – and turns him into a killing machine. He then uses his newfound gifts to seek revenge on the men who killed his wife. Here is another science-fiction premise that may have some actual future plausibility, with a dose of gory violence to spice up the entertainment factor. Judging by this trailer, Whannell looks to have hit the mark.
Its venerable phone line wasn't the only newly minted product Apple showed off at the iPhone 8 event on Tuesday. Eddy Cue announced onstage that the company will expand availability of its TV app to seven new countries by the end of the year and will be adding local news and sports programming as well. The TV app will be available in Australia and Canada next month, then spread to Germany, France, Sweden, Norway and the UK by the end of the year. US sports fans (that is, those who live in the country) will be able to track their favorite teams and have Apple TV push an on-screen notification whenever a game starts. Apple also announced that by the end of the year, users will be able to ask Siri directly to switch to a game.
People get up to weird things in New Zealand. At the University of Auckland, if you want to run hours upon hours of experiments on a baby trapped in a high chair, that's cool. You can even have a conversation with her surprisingly chatty disembodied head. BabyX, the virtual creation of Mark Sagar and his researchers, looks impossibly real. The child, a 3D digital rendering based on images of Sagar's daughter at 18 months, has rosy cheeks, warm eyes, a full head of blond hair, and a soft, sweet voice. When I visited the computer scientist's lab last year, BabyX was stuck inside a computer but could still see me sitting in front of the screen with her "father." To get her attention, we'd call out, "Hi, baby. Look at me, baby," and wave our hands. When her gaze locked onto our faces, we'd hold up a book filled with words (such as "apple" or "ball") and pictures (sheep, clocks), then ask BabyX to read the words and identify the objects.