Civil Rights & Constitutional Law


Louisiana Jeffrey Dahmer copycat sentenced for Grindr dating app scheme to kidnap, murder men

FOX News

A 21-year-old Louisiana man has been sentenced to 45 years in prison after plotting a Jeffrey Dahmer-like scheme to meet men on the gay dating app Grindr and kill them, according to federal officials. Chance Seneca of Lafayette Parish targeted one particular victim, as well as other gay men, through the app in 2020 because of their sexual orientation and gender, the Justice Department said. "The facts of this case are truly shocking, and the defendant's decision to specifically target gay men is a disturbing reminder of the unique prejudices and dangers facing the LGBTQ community today," Assistant Attorney General Kristen Clarke of the Justice Department's Civil Rights Division said in a Wednesday statement. Clarke continued: "The internet should be accessible and safe for all Americans, regardless of their gender or sexual orientation. We will continue to identify and intercept the predators who weaponize online platforms to target LGBTQ victims and carry out acts of violence and hate."


ChatGPT Isn't the Only Way to Use AI in Education

WIRED

Soon after ChatGPT broke the internet, it sparked an all-too-familiar question for new technologies: What can it do for education? Many feared it would worsen plagiarism and further damage an already decaying humanism in the academy, while others lauded its potential to spark creativity and handle mundane educational tasks. Of course, ChatGPT is just one of many advances in artificial intelligence that have the capacity to alter pedagogical practices. The allure of AI-powered tools to help individuals maximize their understanding of academic subjects (or more effectively prepare for exams) by offering them the right content, in the right way, at the right time for them has spurred new investments from governments and private philanthropies. There is reason to be excited about such tools, especially if they can mitigate barriers to a higher quality of life--like reading proficiency disparities by race, which the NAACP has highlighted as a civil rights issue.


La veille de la cybersécurité

#artificialintelligence

A picture may be worth a thousand words. But what about a picture generated entirely by a machine? That is the question scholars, advocates, and internet users have been considering lately, as art generated by artificial intelligence (AI) has exploded in popularity. Some commentators have asked who regulates this digitally created art and whether the courts can prevent theft of creative ideas and techniques in the process of its generation. Toward the end of last year, popular use of the Lensa AI app, which generates stylized portraits based on users' uploaded selfies, spurred the latest round of controversy over the ethics of AI-generated art.


NY AG wants answers on Madison Square Garden's use of facial recognition against legal opponents

Engadget

New York Attorney General Letitia James has sent a letter to MSG Entertainment, the owner and operator of Madison Square Garden and Radio City Music Hall, asking for information about its use of facial recognition to deny entry to attorneys at firms representing its legal opponents. James's letter warns that the Orwellian policy may violate local, state and federal human rights laws, including those prohibiting retaliation. MSG Entertainment's facial recognition has been identifying and denying entry to lawyers from firms representing clients suing the company -- whether or not those attorneys are directly involved in the cases. The company, led by CEO James Dolan (who also owns the New York Knicks and Rangers), has defended the policy, framing it as an attempt to prevent evidence collection "outside proper litigation discovery channels." However, lawyers have called that rationale "ludicrous," criticizing the ban as a "transparent effort" to punish attorneys for suing them.
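The article does not describe how MSG's system works internally, but denylist-style face matching is commonly implemented by comparing a visitor's face embedding against stored embeddings of flagged individuals. Below is a minimal illustrative sketch; the random vectors stand in for the output of a real face-recognition model, and nothing here reflects MSG Entertainment's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_flagged(visitor: np.ndarray, denylist: list, threshold: float = 0.6) -> bool:
    """Return True if the visitor matches any denylisted embedding."""
    return any(cosine_similarity(visitor, e) >= threshold for e in denylist)

# Random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
denylist = [rng.normal(size=128) for _ in range(3)]
visitor = denylist[0] + rng.normal(scale=0.05, size=128)  # near-duplicate face
print(is_flagged(visitor, denylist))  # True
```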


4 questions to ask when evaluating AI prototypes for bias • TechCrunch

#artificialintelligence

It's true there has been progress around data protection in the U.S. thanks to the passing of several laws, such as the California Consumer Privacy Act (CCPA), and nonbinding documents, such as the Blueprint for an AI Bill of Rights. Yet, there currently aren't any standard regulations that dictate how technology companies should mitigate AI bias and discrimination. As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the U.S. are male and 66% are white, which shows an inherent lack of diversity and demographic representation in the development of automated decision-making tools, often leading to skewed data results. Significant improvements in design review processes are needed to ensure technology companies take all people into account when creating and modifying their products.
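One concrete way to act on the point about skewed results: a standard first screen when evaluating a prototype for bias is to compare positive-outcome rates across demographic groups (demographic parity). Here is a minimal sketch; the predictions, group labels, and function name are invented for illustration.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Invented predictions (1 = favorable outcome) and group labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
rates = positive_rates(preds, groups)
print(rates)                                      # {'a': 0.8, 'b': 0.4}
print(max(rates.values()) - min(rates.values()))  # parity gap: 0.4
```

A large gap is not proof of discrimination on its own, but it tells a review team where to look before a tool ships.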


Data Engineer at Wizeline - Paraguay based Remote

#artificialintelligence

Wizeline is a global digital services company helping mid-size to Fortune 500 companies build, scale, and deliver high-quality digital products and services. We thrive on solving our customers' challenges through human-centered experiences, digital core modernization, and intelligence everywhere (AI/ML and data). We help them succeed in building digital capabilities that bring technology to the core of their business. At Wizeline, we are a team of nearly 2,000 people spread across 25 countries. We understand that great technology begins with outstanding talent and diversity of thought.


Artificial Intelligence Takes Center Stage at EEOC

#artificialintelligence

The U.S. Equal Employment Opportunity Commission (EEOC) recently released a draft of its new Strategic Enforcement Plan (SEP), outlining its priorities in tackling workplace discrimination over the next four years. The playbook, published in the Federal Register in January, indicates that the agency will be on the lookout for discrimination caused by artificial intelligence (AI) tools. "The EEOC is signaling in its draft SEP that it intends to enforce federal nondiscrimination laws equally, whether the discrimination takes place through traditional recruiting or through the use of modern and automated tools," said Andrew M. Gordon, an attorney with the law firm Hinshaw & Culbertson LLP in Fort Lauderdale, Fla. Over the last decade, AI use in the workplace has skyrocketed. Nearly 1 in 4 organizations uses AI to support HR-related activities, according to a 2022 survey by the Society for Human Resource Management (SHRM).
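For hiring tools like those the SEP targets, disparities are often screened with the long-standing four-fifths rule from the EEOC's Uniform Guidelines: a group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal sketch, with invented applicant numbers:

```python
def adverse_impact_ratios(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applicants); returns each group's
    selection rate divided by the highest group's rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented numbers: (hired, applicants) per group.
hiring = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(hiring)
print(ratios)  # {'group_a': 1.0, 'group_b': 0.625}
print([g for g, r in ratios.items() if r < 0.8])  # ['group_b'] flagged
```

The four-fifths rule is only a screen; a flagged ratio typically triggers further statistical testing and a job-relatedness analysis rather than an automatic finding of discrimination.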


Council Post: Has Your Talent AI Been Audited?

#artificialintelligence

If you use AI for any form of talent decision-making in your organization and it results in discrimination, whether the discrimination originates with you or is introduced by the AI, you are the one who is liable. When it comes to verifying the ethical nature of AI, this could be just the start of a global ripple effect. AI has the power to do a lot of good, but working on big data comes with risks. I've spent over 20 years as a workforce strategist scaling teams for some of the largest major projects in the world and have witnessed firsthand the impact of not having visibility into the skills and capabilities of my people. I've seen how much potential in our people and our business was being wasted, which motivated me to develop an independently audited ethical talent AI.


Top 10 technology and ethics stories of 2022

#artificialintelligence

A major focus of Computer Weekly's technology and ethics coverage in 2022 was on working conditions throughout the tech sector, from the issue of forced labour and slavery throughout technology supply chains, to UK Amazon workers staging spontaneous "wildcat" strikes in response to derisory pay rises and warehouse conditions. Other stories in this vein included coverage of accusations that "soft union-busting" tactics were used by app-based food delivery firm Deliveroo to scupper its workers' grassroots organising efforts, and the ongoing court case against five major tech firms for their alleged role in the maiming and deaths of people extracting raw materials in the Democratic Republic of Congo. Artificial intelligence (AI) also featured heavily in Computer Weekly's technology and ethics coverage in 2022, with stories published on the tech sector's lacklustre commitment to "ethical" AI, as well as on the pitfalls and challenges of auditing AI-powered algorithms. Police technology was another major focus of 2022, as policing bodies continue to push ahead with new tech deployments such as live facial recognition (LFR) despite serious concerns about its proportionality and efficacy. Other stories focused on how technology is developed and deployed, and the underlying power dynamics at play.


Artificial intelligence needs regulations that build public trust in it

#artificialintelligence

To build trust and confidence in the technology, laws should require organisations and governments to use AI in an ethical, safe and responsible manner that protects people's privacy. This means companies and the government must be accountable for the decisions their AI systems make. It means AI systems must be transparent and that an organisation can explain how a person's data is being used by the AI system. It means protections must be put in place to help reduce the risk that AI outputs are biased or discriminatory. It means individuals are notified when AI is used to make a decision that affects their rights. It means there are boundaries on how high-risk AI systems can be used, and it means individuals have appropriate legal recourse when those boundaries are broken.