A technology initially used to fight traffic fines is now helping refugees with legal claims. When Joshua Browder developed DoNotPay he called it "the world's first robot lawyer". It's a chatbot - a computer program that carries out conversations through text or voice commands - and it uses Facebook Messenger to gather information about a case before spitting out advice and legal documents. It was originally designed to help people wriggle out of parking or speeding tickets. But now Browder - a 20-year-old British man currently studying at Stanford University - has adapted his bot to help asylum seekers.
Refugees struggling with asylum applications can now use a chatbot to get free legal aid in the US, Canada and the UK. "For example, the best answer for your situation will include a description of when the mistreatment started in your home country," Browder said. In order to give free legal aid, DoNotPay relies on Facebook Messenger, which is not automatically end-to-end encrypted, because it is "the most accessible platform and the most appropriate to launch with". "All data is deleted from my server after ten minutes and it is possible to wipe your data from Facebook Messenger," he said, acknowledging that privacy is a "very important issue and it's important to be upfront with users".
The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now retooling the bot to help refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. Those in the UK are told they need to apply in person, and the bot helps them fill out an ASF1 form for asylum support.
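DoNotPay's code is not public, but the workflow the articles describe - a scripted conversation that gathers answers and then fills in a legal document - can be modelled as a fixed question list whose responses populate a template. The questions, field names and template below are invented for illustration only:

```python
# Illustrative sketch of a scripted intake bot: not DoNotPay's actual
# code. Each question's answer fills a slot in a document template.
QUESTIONS = [
    ("country", "Which country are you applying for asylum in?"),
    ("name", "What is your full name?"),
    ("mistreatment_start", "When did the mistreatment start in your home country?"),
]

TEMPLATE = (
    "Asylum support application for {name}\n"
    "Jurisdiction: {country}\n"
    "Mistreatment began: {mistreatment_start}\n"
)

def run_intake(answer_fn):
    """Ask each question in turn and return the completed document."""
    answers = {key: answer_fn(prompt) for key, prompt in QUESTIONS}
    return TEMPLATE.format(**answers)

# Canned answers stand in for a live Messenger conversation.
canned = {
    "Which country are you applying for asylum in?": "UK",
    "What is your full name?": "A. Example",
    "When did the mistreatment start in your home country?": "March 2015",
}
print(run_intake(canned.get))
```

In a real deployment the `answer_fn` callback would be wired to a messaging platform's webhook rather than a dictionary lookup.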
Artificial Intelligence can predict the outcomes of European Court of Human Rights trials to a high accuracy, according to research published today. It judged the final result of legal trials, based on the information in human rights cases, with 79 per cent accuracy. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," said Dr Nikolaos Aletras, lead author of the research and researcher at the Department of Computer Science at University College London. The software uses natural language processing and machine learning to analyse case information from both sides, Aletras told The Register.
A team of researchers has used an artificial intelligence system to correctly predict the outcome of hundreds of human rights cases. Aletras added that AI could be a "valuable tool" for highlighting cases that are most likely to be violations of the European Convention on Human Rights. The AI was able to trawl through 584 published cases, identify patterns and conclude whether each marked a "violation" or "non-violation" of the law. The chosen cases all related to one of three core articles of the European Convention on Human Rights: torture (Article 3), the right to a fair trial (Article 6) and respect for private life (Article 8).
In the study, a team of British and American researchers said it had used an AI system to correctly predict the outcomes of hundreds of cases heard at the European Court of Human Rights. The AI, which analyzed 584 English language case texts related to Articles 3, 6 and 8 of the European Convention on Human Rights using a machine learning algorithm, came to the same verdict as human judges in 79 percent of the cases. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," lead researcher Nikolaos Aletras, also from UCL, noted in the statement.
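The pipeline the researchers describe - turning each case text into word-based features and training a classifier to label it "violation" or "non-violation" - can be sketched in miniature. This is not the study's actual model or data (the paper used richer features over 584 real judgments); the toy corpus and the simple perceptron below are invented placeholders showing the general technique:

```python
# Minimal sketch of text classification for case outcomes: bag-of-words
# features plus a perceptron. Illustrative only; the training "cases"
# are invented and far smaller than the study's 584 real judgments.
from collections import Counter

def featurize(text):
    """Lowercased bag-of-words counts for one case text."""
    return Counter(text.lower().split())

def train_perceptron(examples, epochs=10):
    """examples: list of (text, label), label +1 = violation, -1 = non-violation."""
    weights = Counter()
    for _ in range(epochs):
        for text, label in examples:
            feats = featurize(text)
            score = sum(weights[w] * c for w, c in feats.items())
            if label * score <= 0:  # misclassified: nudge weights toward label
                for w, c in feats.items():
                    weights[w] += label * c
    return weights

def predict(weights, text):
    score = sum(weights[w] * c for w, c in featurize(text).items())
    return "violation" if score > 0 else "non-violation"

# Toy training data, invented for illustration.
train = [
    ("applicant detained and subjected to degrading treatment", +1),
    ("prolonged detention without access to a fair hearing", +1),
    ("court found the domestic proceedings adequate and fair", -1),
    ("no interference with the applicant's private life established", -1),
]
w = train_perceptron(train)
print(predict(w, "degrading treatment during detention"))
```

A production system would use n-gram features, a stronger classifier such as an SVM, and cross-validation to produce an accuracy figure comparable to the reported 79 per cent.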
An artificial intelligence system has correctly predicted the outcomes of hundreds of cases heard at the European Court of Human Rights, researchers have claimed. But critics said no AI would be able to understand the nuances of a legal case. The cases were picked both because they concerned fundamental rights and because a large amount of published data was available on them. Increasingly, law firms are turning to AI to help them wade through vast amounts of legal data.
We like to think we've moved past the workplace sexism of the 1950s, when men were professionals and women were secretaries. But while women have managed to break out of those subservient roles, the genders we assign to artificial-intelligence robots suggest our prejudices haven't made as much progress. After law firm BakerHostetler hired an AI "lawyer" named ROSS, journalist Rose Eveleth noted that the male name was somewhat unusual in the world of AI. "I would just like to note that all the assistant AIs are given female names, but the lawyer AI is named Ross," she wrote. Critics have previously noted that most AI assistants, including Apple's Siri, Google Now, Amazon's Alexa, and Microsoft's Cortana, sound like women.
Some have attempted to excuse the trend, pointing to research that shows people respond more positively to women's voices. Meanwhile, the AI lawyer is called ROSS, and IBM's advanced AI computer system, which famously beat human champions on the quiz show Jeopardy!, goes by Watson.