Living in a busy city doesn't increase the chance of getting Covid-19, but overcrowding does, a new study reveals. After investigating the link between density and virus transmission in the city, the researchers found that 'density alone cannot be considered a risk factor'. The experts stress the difference between high urban density – a high number of people inhabiting an urbanised area – and overcrowding. Figure: the state of pandemic spread at the city level (right) and at the national level (left).
We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search for something on the internet, the top results you see are decided by AI. Any recommendations you get from your favourite shopping or streaming websites will also be based on an AI algorithm, which uses your browsing history to find things you might be interested in.
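The idea of recommending items from browsing history can be sketched as a toy content-based recommender: build a "taste profile" from items the user has viewed, then rank unseen items by cosine similarity to that profile. The catalogue, tags, and weights below are all made-up illustrations, not any real service's algorithm.

```python
from math import sqrt

# Hypothetical item catalogue: each item described by tag weights (illustrative only).
CATALOGUE = {
    "sci-fi novel":   {"sci-fi": 1.0, "books": 1.0},
    "space opera tv": {"sci-fi": 1.0, "video": 1.0},
    "cookbook":       {"cooking": 1.0, "food": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight dicts."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, k=2):
    """Rank items the user hasn't seen by similarity to a profile
    summed from the tags of items in their browsing history."""
    profile = {}
    for item in history:
        for tag, w in CATALOGUE[item].items():
            profile[tag] = profile.get(tag, 0.0) + w
    scored = sorted(
        ((cosine(profile, feats), name)
         for name, feats in CATALOGUE.items() if name not in history),
        reverse=True,
    )
    return [name for _, name in scored[:k]]

print(recommend(["sci-fi novel"]))  # → ['space opera tv', 'cookbook']
```

Real recommender systems combine many more signals (collaborative filtering, learned embeddings), but the rank-by-similarity-to-history core is the same.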
With the proliferation of female robots such as Sophia and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon), and Cortana (Microsoft), artificial intelligence seems to have a gender issue. This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even Unesco warned against the dangers of this practice) because it could reinforce stereotypes about women being objects. But why is femininity injected into artificially intelligent objects? If we want to curb the massive use of female gendering in AI, we need to better understand the deep roots of this phenomenon. In an article published in the journal Psychology & Marketing, we argue that research on what makes people human can provide a new perspective on why feminization is systematically used in AI.
Substituting a cup of cocoa for other snacks throughout the day could help obese people lose weight – even if they're on a high-fat diet, a new study claims. In lab experiments, US researchers gave obese mice with liver disease a dietary supplement of cocoa powder for eight weeks. Even though the mice were on a high-fat diet, the experts found the supplement reduced DNA damage and the amount of fat in their livers. While there is more to learn about the health benefits of cocoa, the researchers believe it may in some way impede the digestion of dietary fat and carbohydrate, thereby preventing weight gain. In these high-fat-fed mice with liver disease, cocoa supplementation markedly reduced the severity of their condition.
This work exploits a large source domain for pretraining and transfers the diversity information from source to target. Highlights:
- An anchor-based strategy for realism over regions in latent space
- A novel cross-domain distance consistency loss
- Existing models can be leveraged to model new distributions with less data
Extensive results demonstrate qualitatively and quantitatively that this few-shot model automatically discovers correspondences between source and target domains and generates more diverse and realistic images than previous methods.
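The distance consistency idea can be sketched, very loosely, as encouraging the *relative* distances among generated samples in the target domain to match those in the source domain, so target generations inherit the source's diversity. The toy version below (plain Python, squared-Euclidean distances, softmax rows, KL penalty) is an illustration under those assumptions, not the paper's actual loss or feature space.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def pairwise_dists(feats):
    """Squared Euclidean distance from each vector to every *other* vector."""
    rows = []
    for i, a in enumerate(feats):
        rows.append([sum((x - y) ** 2 for x, y in zip(a, b))
                     for j, b in enumerate(feats) if j != i])
    return rows

def distance_consistency_loss(source_feats, target_feats):
    """Average KL divergence between each anchor's softmaxed distance
    distribution in the source batch and the matching one in the target
    batch. Zero when the two geometries agree exactly."""
    loss = 0.0
    for s_row, t_row in zip(pairwise_dists(source_feats),
                            pairwise_dists(target_feats)):
        p, q = softmax(s_row), softmax(t_row)
        loss += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return loss / len(source_feats)
```

In training, such a term would be added to the adversarial objective; it penalizes the target generator for collapsing samples that were well spread out in the source domain.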
Most of us think of tears as a human phenomenon, part of the complex fabric of human emotion. But they're not just for crying: all vertebrates, even reptiles and birds, have tears, which are critical for maintaining healthy eyesight. Now, a new study, published this week in the journal Frontiers in Veterinary Science, reveals that non-human animals' tears are not so different from our own. The chemical similarities are so great, in fact, that the composition of other species' tears – and how they're adapted to their environments – may provide insights into better treatments for human eye disease. Previously, scientists had closely studied the tears of only a handful of mammals, including humans, dogs, horses, camels, and monkeys.
An analysis of electronic health records for 1.7 million Wisconsin patients revealed a variety of health problems newly associated with fragile X syndrome, the most common inherited cause of intellectual disability and autism, and may help identify cases years in advance of the typical clinical diagnosis. Researchers from the Waisman Center at the University of Wisconsin–Madison found that people with fragile X are more likely than the general population to also have diagnoses for a variety of circulatory, digestive, metabolic, respiratory, and genital and urinary disorders. Their study, published recently in the journal Genetics in Medicine, the official journal of the American College of Medical Genetics and Genomics, shows that machine learning algorithms may help identify undiagnosed cases of fragile X syndrome based on diagnoses of other physical and mental impairments. "Machine learning is providing new opportunities to look at huge amounts of data," says lead author Arezoo Movaghar, a postdoctoral fellow at the Waisman Center. "There's no way that we can look at 2 million records and just go through them one by one. We need those tools to help us to learn from what is in the data."
The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen suggests that artificial intelligence systems might actually reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose "training labels" consisted of sound files reached higher performance in identifying objects in images than another network trained in the more traditional manner, using simple binary inputs.
According to a new study in the journal Nature Materials, researchers from Stanford University have harnessed machine learning to overturn long-held assumptions about the way lithium-ion batteries charge and discharge, providing engineers with a new list of criteria for making longer-lasting battery cells. This is the first time machine learning has been coupled with knowledge obtained from experiments and physics equations to uncover and describe how lithium-ion batteries degrade over their lifetime. Machine learning accelerates analyses by finding patterns in large amounts of data. In this instance, researchers taught the machine to study the physics of a battery failure mechanism in order to design superior and safer fast-charging battery packs. Fast charging can be stressful and harmful to lithium-ion batteries, and resolving this problem is vital to the fight against climate change.