San Diego-based startup TuSimple said its self-driving trucks will begin hauling mail between USPS facilities in Phoenix and Dallas to see how the nascent technology might improve delivery times and costs. A safety driver will sit behind the wheel to intervene if necessary and an engineer will ride in the passenger seat. If successful, it would mark an achievement for the autonomous driving industry and a possible solution to the driver shortage and regulatory constraints faced by freight haulers across the country. The pilot program involves five round trips, each totaling more than 2,100 miles (3,380 km) or around 45 hours of driving. It is unclear whether self-driving mail delivery will continue after the two-week pilot.
Researchers at UC Davis and UC San Francisco have found a way to teach a computer to precisely detect one of the hallmarks of Alzheimer's disease in human brain tissue, delivering a proof of concept for a machine-learning approach capable of automating a key component of Alzheimer's research. Amyloid plaques are clumps of protein fragments in the brains of people with Alzheimer's disease that destroy nerve cell connections. Much like the way Facebook recognizes faces based on captured images, the machine learning tool developed by a team of University of California scientists can "see" if a sample of brain tissue has one type of amyloid plaque or another -- and do it very quickly. The findings, published May 15, 2019 in Nature Communications, suggest that machine learning can augment the expertise and analysis of an expert neuropathologist. The tool allows them to analyze thousands of times more data and ask new questions that would not be possible with the limited data processing capabilities of even the most highly trained human experts.
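The study's classifier is a convolutional neural network trained on stained tissue images; as a rough illustration of the underlying task, the sketch below trains a tiny logistic-regression classifier to separate two synthetic "plaque-like" patch populations by brightness. All names here (`make_patches`, `train_classifier`) and the synthetic data are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(n, brightness):
    """Generate n synthetic 8x8 grayscale patches around a mean brightness."""
    return rng.normal(loc=brightness, scale=0.1, size=(n, 64))

def train_classifier(X, y, lr=0.5, steps=500):
    """Logistic regression via gradient descent; a stand-in for the real CNN."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                              # gradient of log loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Two toy plaque "types": brighter dense-core vs dimmer diffuse patches.
X = np.vstack([make_patches(200, 0.8), make_patches(200, 0.4)])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = train_classifier(X, y)
accuracy = (((X @ w + b) > 0) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The appeal of the approach described in the paper is exactly this kind of scalability: once trained, a classifier can score far more tissue samples per hour than a human neuropathologist.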
In this Oct. 31, 2018, file photo, a man, who declined to be identified, has his face painted to represent efforts to defeat facial recognition during a protest at Amazon headquarters over the company's facial recognition system, "Rekognition," in Seattle. San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies. These days, with facial recognition technology, you've got a face that can launch a thousand applications, so to speak. Sure, you may love the ease of opening your phone just by facing it instead of tapping in a code. But how do you feel about having your mug scanned to identify you as you drive across a bridge, board an airplane, or walk into a Taylor Swift concert to confirm you're not a stalker?
As San Francisco moves to regulate the use of facial recognition systems, we reflect on some of the many 'faces' of the fast-growing technology. Last week, San Francisco became the first city in the United States to ban the use of facial recognition technology, at least by law enforcement, local agencies, and the city's transport authority. My immediate reaction to the headlines was that this was great for individuals' privacy, a truly bold decision by the San Francisco board of supervisors. The ordinance actually covers more than just facial recognition, as it states the following: "'Surveillance Technology' means any software, electronic device, system utilizing an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group." The ban excludes San Francisco's airport and sea port, as these are operated by federal agencies. Nor does it prevent individuals, companies, or other organizations from installing surveillance systems that include facial recognition, and the agencies barred from using the technology can still cooperate with those that are permitted to use it.
Facebook isn't often thought of as a robotics company, but new work being done in the social media giant's skunkworks AI lab is trying to prove otherwise. The company on Monday gave a detailed look into some of the projects being undertaken by its dedicated team of AI researchers at its Menlo Park, California-based headquarters, many of which are aimed at making robots smarter. Among the machines being developed are walking hexapods that resemble a spider, a robotic arm, and a human-like hand complete with sensors to help it touch. The hope is that what these robots learn can be applied to the company's other AI systems and make them smarter.
Sometime last fall, a man walked into the Nike store in Pasadena, California. He was a runner, it was a running-centric store, and he was there to buy a pair of running shoes just like the ones he had worn in the past. The clerk asked him if he'd be willing to have his feet measured a new way. "I'm a 9," the runner said. "I've always been a 9. Just give me a 9." Still, he relented.
On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial inequality in audits of facial recognition software from companies like Amazon and Microsoft, as well as dystopian surveillance happening now in China. At the core of arguments around the regulation of facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be permanently banned. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."
Large-scale search advertising systems face many challenges in Natural Language Understanding and Computer Vision, such as query and ads understanding, semantic representation, fast ads retrieval and relevance modeling, product image understanding, and product detection. In his insightful talk, Bruce Zhang from Microsoft AI & Research will walk us through these challenges and share how the Microsoft team has developed and deployed cutting-edge technologies, based on deep learning and ads domain data, in their Ads stack to improve ad quality and increase revenue per thousand searches (RPM). In addition, he will share deep learning techniques used in Bing Ads such as query/ads semantic embedding models and a KNN search service, a query tagging model, generative models for query rewriting, a DNN-based query-keyword relevance model, visual product recognition models, and product detection and description generation models for Product Ads. Who is this talk for? If your work touches machine learning, this talk is for you.
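To give a sense of what embedding-based ad retrieval with KNN search looks like, here is a deliberately tiny sketch: bag-of-words embeddings over a fixed vocabulary and brute-force cosine similarity. The real Bing Ads stack uses learned DNN embeddings and an approximate KNN service; the vocabulary, ad keywords, and function names below are all hypothetical.

```python
import numpy as np

vocab = ["cheap", "flights", "running", "shoes", "hotel", "paris"]

def embed(text):
    """Bag-of-words embedding over a fixed vocabulary, L2-normalized."""
    v = np.array([text.split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Pre-embed the ad keyword inventory once, at indexing time.
ad_keywords = ["cheap flights paris", "running shoes", "paris hotel"]
ad_matrix = np.vstack([embed(k) for k in ad_keywords])

def knn_retrieve(query, k=2):
    """Return the k ad keywords most similar to the query (cosine, brute force)."""
    sims = ad_matrix @ embed(query)
    top = np.argsort(-sims)[:k]
    return [ad_keywords[i] for i in top]

print(knn_retrieve("cheap flights"))
```

The design point this illustrates is the separation of an offline indexing step (embedding the keyword inventory) from a fast online lookup, which is what makes embedding retrieval practical at search-engine scale.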
Deep learning computer vision startup allegro.ai is set to showcase its latest product offering, hosted at the Intel partner booth (booth #307), during the Embedded Vision Summit which will take place in Santa Clara, California on May 20-May 23, 2019. The company's platform and product suite simplify the process of developing and managing deep learning-powered perception solutions - such as for autonomous vehicles, medical imaging, drones, security, logistics and other use cases. The platform enables engineering and product managers to get the visibility and control they need, while research scientists focus their time on research and creative output. The result is meaningfully higher quality products, faster time-to-market, increased returns to scale, and materially lower costs. The company's investors include Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund, Hyundai Motor Company, and other venture funds.