'Gutfeld!' panel reacts to CNN's suspension of Chris Cuomo after texts reveal the lengths he went to aid his brother Andrew Cuomo amid sex scandal This is a rush transcript from "Gutfeld!" This copy may not be in its final form and may be updated. So, all is not well at CNN. Yes, there is more friction in the fake news factory than there is between Stelter's thighs while wearing his favorite pair of Lululemons. I speak of the network home of hysterics, hall monitors and one anchor who would make a great, well, anchor. As you know, Chris Cuomo is in more hot water than a package of ramen noodles. He just got suspended indefinitely. According to the New York Attorney General's Office, Chris was far more involved in his brother's damage control efforts than previously admitted. Fake news, CNN is totally fake. Now, as you know, Andrew Cuomo, the ex-governor, was accused of sexual harassment multiple times. The guy touched more women than Pete Davidson at a wrap party. Chris admitted to helping his brother out in fighting the accusations, and who wouldn't help his brother, really? But new documents reveal he was in regular touch with his bro's former top aide, and as the accusations piled up, Chris demanded to know when damaging articles would come out, promising he'd use his media connections to help his sleazy sibling. So, this is turning into the best Lifetime movie I've ever seen. And I've seen them all, including "12 Men of Christmas." Now, previously, Chris said he never made calls to the press about his brother. And why shouldn't we believe him? He's been so honest before. A little sweaty, just worked out. Happens. This is what I've been dreaming of. Now, to pull that off, you need a blind spot the size of Wendy Williams's feet. TYRUS, FOX NEWS CHANNEL CONTRIBUTOR (voice-over): Nice, that was good. TYRUS: That was -- GUTFELD: But it seems like Chris was indeed gathering intel, including dirt on one accuser.
Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...
Artificial intelligence tools in hiring have so far remained unregulated by U.S. civil rights agencies, despite growing use and potential discrimination risks. One EEOC official wants that to change. "What is unfair is if there are enforcement actions or litigation, both from the government and from the private sector, against those who are using the technologies, and the federal agency responsible for administering the laws has said nothing," Keith Sonderling, a Republican commissioner on the U.S. Equal Employment Opportunity Commission, told Bloomberg Law in an exclusive interview. The use of artificial intelligence for recruitment, resume screening, automated video interviews, and other employment tasks has for years been on the radar of federal regulators and lawmakers, as workers began filing allegations of AI-related discrimination to the EEOC. Attorneys have warned that bias litigation could soon be on the horizon.
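One concrete test that regulators and employment attorneys often discuss in this context is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a selection rate for a protected group below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal sketch of that check, using made-up numbers rather than any real hiring data:

```python
# Illustrative sketch of the EEOC "four-fifths rule" screen for adverse
# impact in a selection step such as an automated resume screen.
# All numbers below are invented for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screening step."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest (reference)
    group's selection rate."""
    return group_rate / reference_rate

# Hypothetical outcomes of an automated resume screen for two groups.
rate_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: potential adverse impact")
```

The four-fifths rule is a rough screening heuristic, not a legal conclusion; in practice, statistical significance tests and job-relatedness analyses would also come into play.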
This report from the Montreal AI Ethics Institute covers the most salient progress in research and reporting over the second quarter of 2021 in the field of AI ethics, with a special emphasis on "Environment and AI," "Creativity and AI," and "Geopolitics and AI." The report features an exclusive piece titled "Critical Race Quantum Computer," which applies ideas from quantum physics to explain the complexities of human characteristics and how they can and should shape our interactions with each other. It also includes special contributions on pedagogy in AI ethics, sociology and AI ethics, and the organizational challenges of implementing AI ethics in practice. In keeping with MAIEI's mission to highlight scholars from around the world working on AI ethics issues, the report features two spotlights on scholars in Singapore and Mexico who are helping to shape policy measures related to the responsible use of technology. Finally, an extensive section covers the gamut of societal impacts of AI, spanning bias, privacy, transparency, accountability, fairness, interpretability, disinformation, policymaking, law, regulation, and moral philosophy.
'Gutfeld!' panel debates whether CNN will change its coverage This is a rush transcript from "Gutfeld!" This copy may not be in its final form and may be updated. I want to protect free speech. No, we want people to be protected from disinformation, to be protected from dying in this country, to be protected from people like Donald Trump who spread disinformation for -- who love to make sure that the division and the death continues. That was a rough weekend, and not just for Kat. But at least she kept her clothes on, unlike our other guest, Jimmy Failla. But it was a far worse weekend for CNN. First, let's go to our roly-poly guacamole gossip goalie. See how bad it got on "Unreliable Fart Noises." Here's Michael Wolff delivering that smack to the hack. You know, you become part of -- one of the parts of the problem of the media. You know, you come on here and you -- and you have a, you know, a monopoly on truth. You know, you know exactly how things are supposed to be done. You know, you are why -- one of the reasons people can't stand the media. You should see the rest of the world, buddy. Can I hear that chuckle again? But if that was a heavyweight fight -- and it is, because, you know, Stelter -- it would have been stopped in the first 25 seconds. It got worse, meaning better. Lots better. STELTER: It's -- how -- so what should I do differently, Michael? WOLFF: You know, don't talk so much. Listen more, you know. People have genuine problems with the media. The media doesn't get the story right.
The world of work is changing radically. Each of us, to a greater or lesser extent, has to come to terms with new forms of interaction, business and workflows. New actors have appeared on the scene, clad not in flesh and blood, but in circuits and transistors: Artificial Intelligence systems, which are increasingly present in the management of a company's personnel. A recent and very interesting work by Global Legal Group Ltd. of London, entitled "AI, Machine Learning & Big Data -- Third Edition," analyzes various situations in which this new cybernetic actor is forcing its way into the global market. As stated in the introduction, more and more employers are relying on these automated systems to decide on recruitment, screen resumes, impose disciplinary measures or carry out dismissals.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director of the interdisciplinary UW Tech Policy Lab and the UW Center for an Informed Public. Professor Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering. The following is a lightly edited transcript of a discussion that took place shortly after the publication of the European Commission's proposed new regulation of artificial intelligence (AI). The European Commission has today released a proposed regulation around AI. This is obviously something you have been preparing for and waiting to see happen. What did the EU put out? Years ago I wrote a primer and roadmap for AI policy and also hosted the inaugural Obama White House workshop on artificial intelligence policy. Many of the themes of that essay and of the workshop were reflected in the EU proposal, which is to say that they're not limiting themselves to decision-making by AI. Their approach is to look at the impacts of AI holistically, and to tackle everything from liability should there be harm, to additional obligations for high-risk uses, to facial recognition and biometrics.
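The Commission's April 2021 proposal is structured around risk tiers: prohibited practices, high-risk systems with strict obligations, limited-risk systems with transparency duties, and minimal-risk systems. A simplified sketch of that structure follows; the tier names track the proposal, but the example mappings below are a rough paraphrase of the Commission's own illustrations, not the legal text:

```python
# Simplified, illustrative sketch of the risk-tier structure in the
# European Commission's April 2021 AI regulation proposal.
# Example use cases are paraphrased illustrations, not the legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict obligations"
    LIMITED = "permitted subject to transparency duties"
    MINIMAL = "no new obligations"

# Illustrative mapping of example use cases to tiers.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

Note that hiring tools landing in the high-risk tier connects this proposal to the employment-discrimination concerns raised by regulators elsewhere in this roundup.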
It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.
Fox News contributor Karl Rove reacts to Trump blasting the media and Big Tech for being 'massively corrupt.' WASHINGTON – Congressman-elect Jay Obernolte, a 50-year-old who is a video game developer by trade, will be a bit of an outlier in Congress. That's because members of Congress are not necessarily known as a technologically savvy bunch. This reputation has been earned by many awkward moments and stumbles by members when discussing tech, including in a 2018 hearing when Rep. Steve Cohen, D-Tenn., told Alphabet CEO Sundar Pichai, "I use your apparatus often," referring to Google, the search engine. But Obernolte – whose company FarSight Studios creates games for a variety of platforms ranging from PlayStation to iOS – said that, with the right approach, Congress can and should effectively address major tech issues ranging from net neutrality to Section 230. "I actually think that sometimes we get caught up in jargon from a technological standpoint, which is not helpful because I don't think the technology is unapproachable," he told Fox News in an interview.
On Thursday, 26 November, Prof. Andrew Murray will deliver the Sixth T.M.C. Asser Lecture – 'Almost Human: Law and Human Agency in the Time of Artificial Intelligence'. Asser Institute researcher Dr. Dimitri Van Den Meerssche had the opportunity to speak with Professor Murray about his perspective on the challenges posed by Artificial Intelligence to our human agency and autonomy – the backbone of the modern rule of law. A conversation on algorithmic opacity, the peril of dehumanization, the illusory ideal of the 'human in the loop' and the urgent need to go beyond 'ethics' in the international regulation of AI. One central observation in your Lecture is how Artificial Intelligence threatens human agency. Could you elaborate on your understanding of human agency and how it is being threatened? In my Lecture I refer to the definition of agency by legal philosopher Joseph Raz. He argues that to be fully in control of one's own agency and decisions you need to have capacity, the availability of options and the freedom to exercise that choice without interference. My claim is that there are four ways in which the adoption and use of algorithms affect our autonomy, and particularly Raz's third requirement: that we are to be free from coercion. First, there is an internal and positive impact. This happens when an algorithm gives us choices, which have been limited by pre-determined values – values that we cannot observe. The second impact is internal and negative. In this scenario, choices are removed because of pre-selected values.