"Please come out, we'd all love to see you." --Andrea Boyczuk

She hadn't driven on 75 since before Christmas. There were lots of cars and self-driving trucks on the road, and in MR the sky had sprouted thousands of virtual signs, labels, and guides. It seemed a lot was going on. Eventually the silence made her edgy and she said, "So you're Lake Erie. How long have you been awake?" "I've been a legal person since 2017." The lake had a smooth, masculine voice, with none of the artificiality she'd heard in Mercury's on those occasions when she'd spoken to it directly and not through Donna. "I was made one so that the citizens of Ohio could litigate on my behalf. But I have a lot more resources since I have the actants' network attached to me."
Summary: Some industries are a clear slam dunk for AI/ML applications, and some less so. The legal, regulatory, and compliance businesses (law firms, internal legal departments, and the contract review and regulatory compliance departments of heavily regulated industries) fall in this latter category. This is a review of seven companies that TopBots found to be successful, pointing to opportunities others can follow. Remember just a few years ago, when we looked ahead to now or a little beyond and imagined what applications AI/ML would have in different industries? Some of those prognostications were slam dunks, such as predicting customer propensity or using machine vision to count whatever widgets you were interested in.
Police and border guards must combat racial profiling and ensure that their use of "big data" collected via artificial intelligence does not reinforce biases against minorities, United Nations experts said on Thursday. Companies that sell algorithmic profiling systems, which are often used to screen job applicants, to public entities and private businesses must be regulated to prevent misuse of personal data that perpetuates prejudices, they said. "It's a rapidly developing technological means used by law enforcement to determine, using big data, who is likely to do what. And that's the danger of it," Verene Shepherd, a member of the UN Committee on the Elimination of Racial Discrimination, told Reuters. "We've heard about companies using these algorithmic methods to discriminate on the basis of skin colour," she added, speaking from Jamaica.
Empowering algorithms to make potentially life-changing decisions about citizens still comes with significant risk of unfair discrimination, according to a new report published by the UK's Centre for Data Ethics and Innovation (CDEI). In some sectors, the need to provide adequate resources to make sure that AI systems are unbiased is becoming particularly pressing, namely the public sector, and specifically policing. The CDEI spent two years investigating the use of algorithms in both the private and public sectors, and encountered many different levels of maturity in dealing with the risks that algorithms pose. In the financial sector, for example, the use of data for decision-making seems to be much more closely regulated, while local government is still in the early days of managing the issue. Although awareness of the threats that AI might pose is growing across all industries, the report found no clear example of good practice when it comes to building responsible algorithms.
AI Policy Matters is a regular column in the ACM SIGAI AI Matters newsletter featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog. Confusion in the popular media about terms such as "algorithm," and about what constitutes AI technology, causes critical misunderstandings among the public and policymakers. More importantly, the role of data is often ignored in ethical and operational considerations: even if AI systems are perfectly built, low-quality and biased data cause unintentional and even intentional hazards. The generative pre-trained transformer GPT-3 is currently in the news.
A Florida woman was arrested for allegedly setting up a dating profile advertising "free meth tonight" that sent suitors to her romantic rival's home looking for sex, police said. Vanessa Marie Huckaba, 29, created the "Islandbabe1234" profile on the website Seeking Arrangement and included the name, photo, cellphone number and address of a woman who was dating her ex-boyfriend, according to an arrest report obtained by The Miami Herald. "Multiple strangers began arriving at the victim's residence thereafter," said Adam Linhardt, a spokesman for the Monroe County Sheriff's Office.
In August 2016, a Bloomberg report revealed a secret aerial surveillance program in Baltimore led by the city's police department. Over eight months, planes equipped with cameras collected over 300 hours of footage, used by the police to investigate alleged crimes. Hardly anyone outside police department leadership and the vendor, Persistent Surveillance Systems, knew. Baltimore's police commissioner at the time, Kevin Davis, defended both the planes and the secrecy. The city's murder rate was spiking, the stretched police department was responding to thousands of calls per day, and footage from the planes was helping police find suspects.
Can computers read and apply legal rules? It's an idea that's gaining momentum, as it promises to make laws more accessible to the public and easier to follow. But it raises a host of legal, technical and ethical questions. The OECD recently published a white paper on "Rules as Code" efforts around the world. The Australian Senate Select Committee on Financial Technology and Regulatory Technology will be accepting submissions on the subject until 11 December 2020.
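The "Rules as Code" idea can be sketched with a toy example: a rule expressed as executable logic rather than statutory prose, so that software can apply it consistently. The rule, thresholds, and names below are invented purely for illustration and are not drawn from any real legislation or from the OECD paper.

```python
from dataclasses import dataclass

# Hypothetical rule, invented for illustration:
# "An applicant is eligible for the benefit if they are at least
#  18 years old and their annual income is below 20,000."
AGE_THRESHOLD = 18
INCOME_CEILING = 20_000

@dataclass
class Applicant:
    age: int
    annual_income: float

def is_eligible(applicant: Applicant) -> bool:
    """Apply the encoded rule to a single applicant."""
    return (
        applicant.age >= AGE_THRESHOLD
        and applicant.annual_income < INCOME_CEILING
    )

# Because the rule is code, it can be tested, audited, and applied
# uniformly, which is part of the appeal of Rules as Code.
print(is_eligible(Applicant(age=25, annual_income=15_000)))  # True
print(is_eligible(Applicant(age=17, annual_income=15_000)))  # False
```

Real Rules-as-Code efforts are far more involved than this sketch, since statutes contain discretion, exceptions, and ambiguity that resist direct translation, which is precisely where the legal and ethical questions mentioned above arise.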
Governments need an abrupt change of direction to avoid "stumbling zombielike into a digital welfare dystopia," Philip G. Alston, a human rights expert reporting on poverty, told the United Nations General Assembly last year, in a report calling for the regulation of digital technologies, including artificial intelligence, to ensure compliance with human rights. The private companies that play an increasingly dominant role in social welfare delivery, he noted, "operate in a virtually human-rights-free zone." Last month, the U.N. expert monitoring contemporary forms of racism flagged concerns that "governments and nonstate actors are developing and deploying emerging digital technologies in ways that are uniquely experimental, dangerous, and discriminatory in the border and immigration enforcement context." The European Border and Coast Guard Agency, also called Frontex, has tested unpiloted military-grade drones in the Mediterranean and Aegean for the surveillance and interdiction of vessels of migrants and refugees trying to reach Europe, the expert, E. Tendayi Achiume, reported. The U.N. antiracism panel, which is charged with monitoring and holding states to account for their compliance with the international convention on eliminating racial discrimination, said states must legislate measures combating racial bias and create independent mechanisms for handling complaints.
Though artificial intelligence (AI) may not surpass human intelligence for at least a few more decades, it opens up opportunities and challenges that we must address today in order to shape a better world for us all. Toby Walsh's new book, 2062: The World that AI Made, makes an effective call to action for business leaders, entrepreneurs, academics, and policymakers. The rise of AI poses serious philosophical, economic, and social questions for all of us, and more vision and collaboration are urgently called for. How many jobs will AI take away or create?