

Alkhatib

AAAI Conferences

Transliterating named entities from one language into another is a complicated task and is considered one of the more challenging problems in machine translation (MT). To build a high-performing transliteration system, we apply well-established techniques based on hybrid deep learning. The system consists of a convolutional neural network (CNN) followed by a Bi-LSTM and a CRF layer. The proposed hybrid mechanism is evaluated on the ANERCorp and Kalimat corpora. The results show that the neural machine translation approach can be used to build efficient machine transliteration systems, achieving state-of-the-art results for the Arabic–English language pair.
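The abstract does not publish the model's parameters, but the final CRF layer it describes is typically decoded with the Viterbi algorithm: given per-character emission scores (here, the Bi-LSTM outputs) and tag-transition scores, pick the highest-scoring tag sequence. Below is a minimal, self-contained sketch of that decoding step; the tag set and all scores are hypothetical, chosen only to illustrate the mechanics.

```python
# Minimal Viterbi decoder for a CRF output layer (pure Python).
# All tags and scores below are illustrative assumptions, not the
# parameters of the system described in the abstract.

def viterbi(emissions, transitions, tags):
    """Return (best score, best tag path).

    emissions:   list of {tag: score} dicts, one per time step
                 (e.g. Bi-LSTM outputs per character).
    transitions: {(prev_tag, tag): score} dict of transition scores.
    """
    # Initialize with the first step's emission scores.
    best = {t: (emissions[0][t], [t]) for t in tags}
    for step in emissions[1:]:
        nxt = {}
        for t in tags:
            # For each current tag, keep the best-scoring previous tag.
            score, path = max(
                (best[p][0] + transitions[(p, t)] + step[t],
                 best[p][1] + [t])
                for p in tags
            )
            nxt[t] = (score, path)
        best = nxt
    return max(best.values())

# Hypothetical 3-step example with BIO-style tags.
tags = ["B", "I", "O"]
emissions = [
    {"B": 2.0, "I": 0.1, "O": 0.5},
    {"B": 0.2, "I": 1.5, "O": 0.3},
    {"B": 0.1, "I": 0.2, "O": 1.0},
]
# Reward the B->I and I->I transitions; all others score zero.
transitions = {(p, t): (0.5 if (p, t) in {("B", "I"), ("I", "I")} else 0.0)
               for p in tags for t in tags}

score, path = viterbi(emissions, transitions, tags)
print(path)  # -> ['B', 'I', 'O']
```

In the full architecture, the CNN extracts character-level features, the Bi-LSTM produces the emission scores, and the CRF's transition scores plus this decoding step enforce globally consistent tag sequences rather than independent per-character decisions.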


Center for Applied Data Ethics suggests treating AI like a bureaucracy

#artificialintelligence

A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess large machine learning models with the potential to harm people.

"This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State," the author wrote.

The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei. It draws in particular on the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems.


The Dark Side of Big Tech's Funding for AI Research

WIRED

Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract a research paper or withdraw her name from it, and she objected. Google maintains that she resigned, and Alphabet CEO Sundar Pichai said in a company memo on Wednesday that he would investigate what happened. The episode is a pointed reminder of tech companies' influence and power over their field. Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments. A recent study found that, among tenure-track faculty at four prominent universities who disclose their funding sources, the majority had received backing from Big Tech.