By reverse-engineering the signals the brain sends, researchers at Carnegie Mellon University have been working on an AI that can read complex thoughts from brain scans alone. The CMU scientists feed data collected from a functional magnetic resonance imaging (fMRI) machine into their machine learning algorithms, which locate the building blocks the brain uses to compose complex thoughts. By recognizing these building blocks, the algorithm can use a brain scan to predict what is being thought about at the time and connect those pieces into a coherent sentence. When the researchers selected 239 such complex sentences and fed the AI the corresponding brain scans, the algorithm predicted the correct thoughts with an impressive 87 percent accuracy.
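The pipeline described above -- map a scan into a space of semantic "building block" features, then pick the candidate sentence whose features match best -- can be sketched as a toy decoder. Everything here is invented for illustration (the sizes, the linear model, the nearest-neighbor step); it is not the CMU team's actual method.

```python
import numpy as np

# Toy sketch of the decoding idea: each sentence is a vector of semantic
# "building block" features, a linear model maps brain-scan voxels into that
# feature space, and the decoder returns the candidate sentence whose
# features are closest to the prediction. All data here is synthetic.

rng = np.random.default_rng(0)
n_voxels, n_features, n_sentences = 200, 42, 239

# Stand-ins for real data: semantic features per sentence, plus scans
# generated by a hidden voxel<->feature mapping with observation noise.
sentence_features = rng.normal(size=(n_sentences, n_features))
true_mapping = rng.normal(size=(n_voxels, n_features))
scans = sentence_features @ true_mapping.T \
        + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# "Train" a voxel->feature mapping by least squares, as a regression might.
learned_mapping, *_ = np.linalg.lstsq(scans, sentence_features, rcond=None)

def decode(scan):
    """Predict semantic features from a scan, return nearest sentence index."""
    predicted = scan @ learned_mapping
    dists = np.linalg.norm(sentence_features - predicted, axis=1)
    return int(np.argmin(dists))

correct = sum(decode(scans[i]) == i for i in range(n_sentences))
accuracy = correct / n_sentences
```

On this clean synthetic data the decoder scores near-perfectly on its own training scans; the reported 87 percent figure, by contrast, came from predicting held-out thoughts.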
Together, these sensors detect and visualise everything around the truck, including cars, pedestrians and lamp posts. The system works with Caesium, a cloud-based platform (also developed by Oxbotica) that can manage and coordinate fleets of autonomous vehicles. The company sells a "smart platform" that gives other companies access to its delivery infrastructure -- the technology behind its apps, its warehouses and its delivery vehicles. "So it's very important for us to keep innovating and to keep doing exciting technology projects, because that will give us a competitive advantage going forward."
A Google Photos upgrade arriving this week uses machine learning to suggest pictures to share based on your own sharing habits, the people in the photos, and whether they're part of a "meaningful moment," such as a party or a wedding. You might not have to remember to share photos of your best friend when you get home from a big weekend shindig -- a machine will do much of the work for you. And while your friends won't need Google Photos to receive suggested shares, the shared library clearly depends on everyone signing up.
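To make the three signals concrete, here is a deliberately simple scoring heuristic combining them. The fields, weights and threshold are made up for the example -- Google's actual ranking is a learned model, not hand-tuned rules like these.

```python
# Illustrative only: a toy score for "should we suggest sharing this photo
# with this contact?", combining the article's three signals -- who is in
# the photo, whether it's a "meaningful moment", and past sharing habits.

def share_score(photo, contact, history):
    score = 0.0
    if contact in photo["people"]:                   # contact appears in the photo
        score += 0.5
    if photo.get("event") in {"party", "wedding"}:   # a "meaningful moment"
        score += 0.3
    # Habit signal: fraction of past photos featuring this contact
    # that you actually shared with them.
    shared, total = history.get(contact, (0, 0))
    if total:
        score += 0.2 * (shared / total)
    return score

photo = {"people": {"alex", "sam"}, "event": "wedding"}
history = {"alex": (8, 10)}  # shared 8 of 10 past photos featuring Alex
score = share_score(photo, "alex", history)  # 0.5 + 0.3 + 0.2*0.8 = 0.96
```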
For Imaginary People, Tyka sought to use generative neural networks to create original portraits, much like the networks Alexander Reben used to mimic Bob Ross' speaking style. If you want a generative model like a GAN (generative adversarial network) to, say, draw you a picture of a cat, you first have to gather a huge data set of cat pictures and train the model to produce images with all the requisite features: ears, whiskers, a tail. A GAN actually pairs two networks: while the first (the generator) creates pictures of cats, the second (the discriminator) compares those generated images against real-world samples -- actual pictures of cats -- and tries to judge whether they're fake. Based on each result, the system tweaks the generator's parameters to make its output look more and more realistic.
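That generator-versus-discriminator loop can be shown in miniature. The sketch below trains a GAN in plain NumPy where the "data" is a 1-D Gaussian rather than cat photos and both networks are single linear units with hand-derived gradients -- it illustrates the adversarial training dynamic, not a practical image model.

```python
import numpy as np

# Minimal GAN on a 1-D toy problem. The discriminator learns to tell real
# samples (drawn near 4.0) from generated ones; the generator learns to
# fool it, dragging its output distribution toward the real data.

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: z -> w_g*z + b_g.  Discriminator: x -> sigmoid(w_d*x + b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the target data
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g                 # the generator's samples

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b_d -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1,
    # i.e. tweak the generator so its output looks more realistic.
    d_fake = sigmoid(w_d * fake + b_d)
    w_g -= lr * np.mean(-(1 - d_fake) * w_d * z)
    b_g -= lr * np.mean(-(1 - d_fake) * w_d)

# After training, generated samples should have drifted toward the
# real data's mean of 4.0.
mean_fake = float(np.mean(w_g * rng.normal(0.0, 1.0, 1000) + b_g))
```

Swap the linear units for deep convolutional networks and the Gaussian for a photo data set, and this same alternating loop is what produces GAN-generated portraits.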
But engineers at Stanford may have made a breakthrough: they've designed a robotic gripper, based on geckos' feet, that works in zero-g. The problem with existing technology is that everything is designed to work in Earth's gravity and within Earth's temperature range. Geckos can climb walls and other vertical surfaces thanks to microscopic flaps on their feet that create an adhesive force. By modeling their technology on these flaps, the team created a gripper that needs only a small push to stick to a surface.
A company called Théoriz, from Lyon, France, has married projector tech, motion tracking and augmented reality to create a "mixed reality room." The setup generates digital environments -- flying space skulls, a Minecraft-like room with holes that open in the floor, and geometric shapes that interact with actors to form stairs, wells or small hills. In the resulting videos, live actors blend seamlessly with these virtual environments, creating a hallucinogenic effect; in the latest demo (above), they open holes in the floor as they move and walk up fake stairs.
Fat Shark has been the go-to maker of racing drone goggles for several years, and it's about to double down on digital, which in turn could be the nudge toward dropping analog feeds that the sport needs. It's still not uncommon to see a racing drone held together by tape or cable ties sporting a shoddily 3D-printed GoPro mount, and for the most part, that's fine. We've seen drones like UVify's Draco and Amimon's Falcore try to sex up racing drones and introduce digital video features -- but most of the sport hasn't committed to going digital just yet. Fat Shark seems aware of this, too: it's dropped the shark logo (kinda) and given itself a visual makeover.
The dating app Hinge has just added a video option to users' profiles. Any of a user's six profile photos can now be swapped for a video that autoplays whenever someone scrolls through their profile. The feature is limited to existing videos, meaning you can't shoot a new one directly in the app. Competitors are moving the same way: Bumble announced earlier this year that it would add Snapchat-like video stories that disappear after 24 hours, and Match is developing a video option that allows for stitched-together videos, photos and voiceover.
Today, the company introduced its new Sports Performance Platform, an analytics system that aims to help teams track, improve and predict their players' performance using machine learning and Surface technology. Microsoft's Sports Performance Platform can, for example, figure out when a player is at risk of injury based on his or her recent performance and recovery time. The company says one of the main benefits of its sports analytics tool is that it's powered by Microsoft's own business products, including Power BI (a cloud-based business intelligence suite that also ties into products like Excel), Azure and, of course, Surface computers. Professional teams such as Seattle Reign FC (of the US National Women's Soccer League) and Real Sociedad (of Spain's La Liga) are already taking advantage of the Sports Performance Platform.
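As a rough illustration of the kind of injury-risk flag described above, here is a toy logistic-style score over workload and recovery inputs. The features, weights and threshold are invented for the example -- they are not Microsoft's model, which is learned from team data rather than hand-tuned.

```python
import math

# Illustrative only: a hand-tuned logistic risk score combining the two
# signals the article mentions -- recent performance load and recovery time.

def injury_risk(minutes_played_7d, avg_recovery_hours, sprint_load):
    # More minutes and sprint load raise risk; more recovery lowers it.
    x = (0.01 * minutes_played_7d
         - 0.08 * avg_recovery_hours
         + 0.02 * sprint_load)
    return 1.0 / (1.0 + math.exp(-x))  # squash to a 0..1 probability-like score

# A heavily used, under-rested player: x = 3.8 - 2.4 + 0.9 = 2.3
risk = injury_risk(minutes_played_7d=380, avg_recovery_hours=30, sprint_load=45)
flagged = risk > 0.7  # flag the player for the coaching staff
```

A real system would fit those weights from historical injury records instead of picking them by hand, but the output is the same shape: a per-player risk score a coach can act on.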
With a mind to update these payphones for the modern age, BT -- which owns the majority of them -- announced last year that it had teamed up with the same crew behind New York's LinkNYC free gigabit WiFi kiosks to make that happen. The first of these units, installed along London's Camden High Street, were switched on today, offering the fastest public WiFi around, free phone calls, USB charging, maps, directions and other local info like weather forecasts, Tube service updates and community messages. As with the LinkNYC program, later plans for the UK's next-gen phone boxes include temperature, traffic, air and noise pollution sensors. Compared with New York's rollout, London is starting small, with only a handful of cabinets along one major street, but many more are expected to spring up around the capital and in other large UK cities before the year's out.