MIT develops algorithm that can 'de-bias' facial recognition software

Daily Mail - Science & tech

MIT researchers believe they've figured out a way to keep facial recognition software from being biased. To do this, they developed an algorithm that not only scans for faces but also evaluates the training data supplied to it. The algorithm scans for biases in the training data and eliminates any that it perceives, resulting in a more balanced dataset. 'We've learned in recent years that AI systems can be unfair, which is dangerous when they're increasingly being used to do everything from predict crime to determine what news we consume,' MIT's Computer Science & Artificial Intelligence Laboratory said in a statement.


Center for 'socialist journalism' opens in China as part of Xi push to cow media

The Japan Times

Amid Chinese President Xi Jinping's moves to bring the media to heel, a "teaching and research center for socialist journalism with Chinese characteristics" opened in Beijing on Sunday, state media reported. The new center, a joint project between Tsinghua University and Fudan University, will likely be used to follow through in implementing orders handed down by Xi in February for news media run by the Communist Party and the government to toe the party line, focusing on what authorities have called "positive reporting." "We should develop journalism in China with a thorough understanding of the good aspects of journalism in other countries, so that wrong or harmful content can be identified," said Tong Bing, a professor at Fudan University. China's state-run media organizations have long been known as Communist Party mouthpieces, but recent moves by Xi have seen the party further cement its grip. In February, Xi toured state media outlets, urging them to play a role in "properly guiding public opinion," part of a ramped-up push by the Chinese president to consolidate the party's grip on power amid growing economic malaise.


Semantic Kernel Forests from Multiple Taxonomies

Neural Information Processing Systems

When learning features for complex visual recognition problems, labeled image exemplars alone can be insufficient. While an object taxonomy specifying the categories' semantic relationships could bolster the learning process, not all relationships are relevant to a given visual classification task, nor does a single taxonomy capture all ties that are relevant. In light of these issues, we propose a discriminative feature learning approach that leverages multiple hierarchical taxonomies representing different semantic views of the object categories (e.g., for animal classes, one taxonomy could reflect their phylogenic ties, while another could reflect their habitats). For each taxonomy, we first learn a tree of semantic kernels, where each node has a Mahalanobis kernel optimized to distinguish between the classes in its child nodes. Then, using the resulting semantic kernel forest, we learn class-specific kernel combinations to select only those relationships relevant to recognizing each object class. To learn the weights, we introduce a novel hierarchical regularization term that further exploits the taxonomies' structure. We demonstrate our method on challenging object recognition datasets, and show that interleaving multiple taxonomic views yields significant accuracy improvements.
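
As a rough illustration of the kernel-combination idea only (not the paper's actual learning procedure), the sketch below builds one Mahalanobis kernel per taxonomy node and sums the node kernels with class-specific weights. The per-node metric is a simple regularized inverse-covariance placeholder rather than the discriminatively learned metric, the hierarchical regularizer is omitted, and the node lists and weights are illustrative assumptions.

import numpy as np

def mahalanobis_kernel(X, Z, M):
    """RBF-style kernel exp(-(x - z)^T M (x - z)) for a PSD metric M."""
    XM = X @ M
    d2 = (np.einsum('ij,ij->i', XM, X)[:, None]
          + np.einsum('ij,ij->i', Z @ M, Z)[None, :]
          - 2.0 * XM @ Z.T)                      # squared Mahalanobis distances
    return np.exp(-np.maximum(d2, 0.0))

def node_metric(X_node):
    """Placeholder metric for one taxonomy node: regularized inverse covariance."""
    cov = np.cov(X_node, rowvar=False) + 1e-3 * np.eye(X_node.shape[1])
    return np.linalg.inv(cov)

def semantic_kernel_forest(X, Z, nodes, weights):
    """Class-specific weighted sum of the node kernels from several taxonomies."""
    K = np.zeros((X.shape[0], Z.shape[0]))
    for (node_name, node_samples), w in zip(nodes, weights):
        M = node_metric(node_samples)            # metric attached to this node
        K += w * mahalanobis_kernel(X, Z, M)     # add this taxonomy view's kernel
    return K

# Toy usage: two hypothetical "taxonomy views" over the same 5-D features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 5))
X_test = rng.normal(size=(10, 5))
views = [("phylogeny-node", X_train[:20]), ("habitat-node", X_train[20:])]
K = semantic_kernel_forest(X_test, X_train, views, weights=[0.7, 0.3])
print(K.shape)  # (10, 40) kernel matrix for a downstream SVM-style classifier

In the paper the combination weights themselves are learned per class under a hierarchical regularizer; here they are fixed constants purely to show how the forest of node kernels is assembled.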


Automatically Creating Multilingual Lexical Resources

AAAI Conferences

The thesis proposes creating bilingual dictionaries and wordnets for languages with few lexical resources by using the resources of resource-rich languages. Our work has the advantage of creating lexical resources while reducing time and cost and, at the same time, improving the quality of the resources created.


Automatic Wordnet Development for Low-Resource Languages using Cross-Lingual WSD

Journal of Artificial Intelligence Research

Wordnets are an effective resource for natural language processing and information retrieval, especially for semantic processing and meaning-related tasks. So far, wordnets have been constructed for many languages; however, the automatic development of wordnets for low-resource languages has not been well studied. In this paper, an Expectation-Maximization algorithm is used to create high-quality, large-scale wordnets for low-resource languages. The proposed method draws on cross-lingual word sense disambiguation and develops a wordnet using only a bilingual dictionary and a monolingual corpus. The method has been applied to the Persian language, and the resulting wordnet has been evaluated through several experiments. The results show that the induced wordnet has a precision of 90% and a recall of 35%.
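
As a rough, self-contained illustration of the EM idea behind cross-lingual sense assignment (not the paper's actual model or features), the toy sketch below alternates between estimating a posterior over candidate synsets for each corpus occurrence of a target word and re-estimating synset priors and context-word distributions. The target word, its candidate synsets, and the miniature corpus are hypothetical examples.

import numpy as np

# Candidate English synsets for a Persian word obtained via a bilingual
# dictionary (hypothetical example: a word translatable as "lion" or "milk").
synsets = ["lion.n.01", "milk.n.01"]

# Contexts from a monolingual corpus in which the target word occurs (toy data).
contexts = [
    ["jungle", "animal", "roar"],
    ["drink", "glass", "cow"],
    ["animal", "hunt"],
    ["cow", "drink"],
]

vocab = sorted({w for c in contexts for w in c})
word_idx = {w: i for i, w in enumerate(vocab)}
V, S = len(vocab), len(synsets)

rng = np.random.default_rng(0)
prior = np.full(S, 1.0 / S)                  # p(synset)
emit = rng.dirichlet(np.ones(V), size=S)     # p(context word | synset)

for _ in range(50):
    # E-step: posterior over candidate synsets for each occurrence.
    post = np.zeros((len(contexts), S))
    for i, ctx in enumerate(contexts):
        log_p = np.log(prior) + sum(np.log(emit[:, word_idx[w]]) for w in ctx)
        p = np.exp(log_p - log_p.max())
        post[i] = p / p.sum()
    # M-step: re-estimate synset priors and emission probabilities.
    prior = post.sum(axis=0) / post.sum()
    counts = np.full((S, V), 1e-2)           # light smoothing
    for i, ctx in enumerate(contexts):
        for w in ctx:
            counts[:, word_idx[w]] += post[i]
    emit = counts / counts.sum(axis=1, keepdims=True)

for s, p in zip(synsets, prior):
    print(f"{s}: p = {p:.2f}")

Running this converges to a distribution over the candidate synsets for the ambiguous word; a full wordnet-induction pipeline would repeat such sense assignment over all dictionary entries and retain the high-confidence word-synset links.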