chilling effect
'The chilling effect': how fear of 'nudify' apps and AI deepfakes is keeping Indian women off the internet
A new report has found an increase in AI tools being used to create digitally manipulated images or videos of women in India. Gaatha Sarvaiya would like to post on social media and share her work online. An Indian law graduate in her early 20s, she is in the earliest stages of her career and trying to build a public profile. The problem is, with AI-powered deepfakes on the rise, there is no longer any guarantee that the images she posts will not be distorted into something violating or grotesque.
- North America > United States (0.29)
- Oceania > Australia (0.06)
- Africa > East Africa (0.05)
- (3 more...)
UK watchdog drops competition review of Microsoft's OpenAI partnership
The UK's competition watchdog will not hold a formal investigation into Microsoft's partnership with the startup behind the artificial intelligence chatbot ChatGPT, stating that while the $2.9tn (£2.3tn) tech company has "material influence" over OpenAI, it does not control it. The Competition and Markets Authority (CMA) said Microsoft, OpenAI's biggest financial backer with a $13bn investment, acquired material influence over the San Francisco-based business in 2019 but did not exercise de facto control over it – and therefore did not meet the threshold for an official inquiry. The decision follows expressions of disquiet over the appointment of the former boss of Amazon UK, Doug Gurr, as the CMA's interim chair. The organisation's chief executive, Sarah Cardell, has also said the CMA does not want to create a "chilling effect" on business confidence, amid pressure from the UK government on regulators to produce pro-growth proposals. The CMA's executive director for mergers, Joel Bamford, said: "We have found that there has not been a change of control by Microsoft from material influence to de facto control over OpenAI. Because this change of control has not happened, the partnership in its current form does not qualify for review under the UK's merger control regime."
- Europe > United Kingdom (0.71)
- North America > United States > California > San Francisco County > San Francisco (0.26)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
US financial watchdog urged to investigate NDAs at OpenAI
OpenAI whistleblowers have urged the US financial watchdog to investigate non-disclosure agreements at the startup after claiming the contracts included restrictions such as requiring employees to seek permission before contacting regulators. Non-disclosure agreements (NDAs) typically bar an employee from sharing company information with outside parties, but a group of whistleblowers are arguing that OpenAI's agreements could have led to workers being punished for raising concerns about the company to federal authorities. San Francisco-based OpenAI is the developer of the ChatGPT chatbot and a key player in the artificial intelligence boom, which has been accompanied by expressions of concern from experts about the potentially dangerous capabilities of the technology. "Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the Commissioners to immediately approve an investigation into OpenAI's prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules," the letter to Gary Gensler, the chair of the US Securities and Exchange Commission (SEC), said. The letter from whistleblower representatives was sent on 1 July and published by the Washington Post on Saturday after the news organisation obtained it from the office of the US senator Chuck Grassley.
- Government > Regional Government > North America Government > United States Government (1.00)
- Law > Business Law (0.79)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
A.I. Microdirectives Could Soon Be Used for Law Enforcement
All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You're told how to cross the street, how fast to drive on the way to work, and what you're allowed to say or do online. If you're in any situation that might have legal implications, you're told exactly what to do, in real time. Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it will be used as evidence in the prosecution that's sure to follow. This future may not be far off: automatic detection of lawbreaking is nothing new.
- Oceania > Australia (0.05)
- North America > United States > New York (0.05)
- North America > United States > Arizona (0.05)
- (2 more...)
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.62)
'Chilling effect': Israel's ongoing surveillance of Palestinians
For activist Issa Amro, the latest revelations from human rights group Amnesty International about Israel's ever-growing use of facial recognition technology against Palestinians come as no surprise. "My people are suffering from it," he told Al Jazeera from Hebron. On May 2, Amnesty published a report titled Automated Apartheid, detailing the workings of Israel's Red Wolf programme – a facial recognition technology used to track Palestinians since last year that is believed to be linked to similar, earlier programmes known as Blue Wolf and Wolf Pack. The technology has been deployed at checkpoints in the city of Hebron and other parts of the occupied West Bank – scanning the faces of Palestinians and comparing them against existing databases.
- Asia > Middle East > Palestine (0.30)
- Asia > India (0.16)
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.07)
- (4 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government > Regional Government > Asia Government > Middle East Government (0.30)
Draft EU AI Act regulations could have a chilling effect
In brief: New rules drafted by the European Union aimed at regulating AI could prevent developers from releasing open-source models, according to the American think tank Brookings. The proposed EU AI Act, yet to be signed into law, states that open source developers must ensure their AI software is accurate and secure, and be transparent about risk and data use in clear technical documentation. Brookings argues that if a private company were to deploy the public model or use it in a product, and it somehow got into trouble due to some unforeseen or uncontrollable effects of the model, the company would probably try to blame the open source developers and sue them. That might force the open source community to think twice about releasing their code, and would, unfortunately, mean the development of AI being driven by private companies. Proprietary code is difficult to analyse and build upon, meaning innovation would be hampered.
- Government (0.56)
- Law (0.36)
The EU's AI Act could have a chilling effect on open source efforts, experts warn
The nonpartisan think tank Brookings this week published a piece decrying the EU's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity. If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers whose work they built their product on. "This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI."
- Government (1.00)
- Information Technology > Security & Privacy (0.70)
- Law > Statutes (0.52)
- Information Technology > Software (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
Researchers warn court ruling could have a chilling effect on adversarial machine learning
A cross-disciplinary team of machine learning, security, policy, and law experts say inconsistent court interpretations of an anti-hacking law have a chilling effect on adversarial machine learning security research and cybersecurity. At issue is a portion of the Computer Fraud and Abuse Act (CFAA). A ruling deciding how part of the law is interpreted could shape the future of cybersecurity and adversarial machine learning. If the U.S. Supreme Court takes up an appeal case based on the CFAA next year, researchers predict that the court will ultimately choose a narrow definition of the clause related to "exceeds authorized access" instead of siding with circuit courts that have taken a broad definition of the law. One circuit court ruling on the subject concluded that a broad view would turn millions of people into unsuspecting criminals.
- North America > United States (0.75)
- North America > Canada > Ontario > Toronto (0.16)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.59)
- Government > Regional Government > North America Government > United States Government (0.40)
How one filmmaker is using artificial intelligence to uncover surveillance of her Muslim community in Chicago
Since she was a kid, Assia Boundaoui knew that she, her family and her neighbors were being watched. It was an open secret in her hometown of Bridgeview, a Chicago suburb home to a large Muslim and Arab population where for decades residents experienced government surveillance, including home visits by FBI agents. Using her training as a journalist and documentary filmmaker, Boundaoui sought out proof beginning in 2014 by interviewing community members and filing Freedom of Information requests for records on Operation Vulgar Betrayal, one of the largest pre-9/11 counterterrorism probes conducted domestically in the United States, and one that included the Bridgeview community. She also submitted hundreds of privacy waivers to the Department of Justice on behalf of people who were surveilled, requesting files on individuals who had experienced surveillance. When the FBI responded, ultimately saying it would take years to process 33,000 pages of records on the investigation, Boundaoui sued. In 2017, a federal judge ruled that she was entitled to expedited processing, ordering the FBI to release 3,500 pages from the Vulgar Betrayal file each month and to give priority to the subfiles of individuals for whom privacy waivers were filed.
- North America > United States > Illinois > Cook County > Chicago (0.61)
- North America > Puerto Rico (0.04)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (2 more...)
Facial recognition: ten reasons you should be worried about the technology
Facial recognition technology is spreading fast. Already widespread in China, software that identifies people by comparing images of their faces against a database of records is now being adopted across much of the rest of the world. It's common among police forces but has also been used at airports, railway stations and shopping centers. The rapid growth of this technology has triggered a much-needed debate. Activists, politicians, academics and even police forces are expressing serious concerns over the impact facial recognition could have on a political culture based on rights and democracy. As someone who researches the future of human rights, I share these concerns.
- North America > United States (0.31)
- Asia > China (0.25)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government > Immigration & Customs (0.99)
- (2 more...)
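The matching step the facial recognition piece above describes (comparing an image of a face against a database of records) is typically performed on embedding vectors rather than raw pixels: each face is mapped to a numeric vector, and identification is a nearest-neighbor search over stored vectors. The sketch below is a minimal, hypothetical illustration; real systems derive embeddings from a deep face-recognition model, and the names, threshold, and random 128-dimensional vectors here are invented stand-ins for demonstration only.

```python
import numpy as np

def match_face(probe, database, threshold=0.6):
    """Return (identity, similarity) for the best match above threshold, else (None, threshold)."""
    best_name, best_score = None, threshold
    for name, ref in database.items():
        # Cosine similarity between the probe embedding and a stored reference embedding.
        score = float(np.dot(probe, ref) / (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy "database": in a real system these vectors would come from a face-embedding model.
rng = np.random.default_rng(0)
db = {name: rng.normal(size=128) for name in ["alice", "bob"]}

# A noisy new image of "alice" should still match her stored record.
probe = db["alice"] + rng.normal(scale=0.1, size=128)
name, score = match_face(probe, db)
```

The threshold is the policy-relevant knob: set it low and the system produces false matches (misidentifying strangers); set it high and it misses true matches. Much of the civil-liberties debate in these articles turns on exactly this trade-off being made opaquely, at scale.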