When you run a major app, all it takes is one mistake to put countless people at risk. Such is the case with Diksha, a public education app run by India's Ministry of Education that exposed the personal information of around 1 million teachers and millions of students across the country. The data, which included full names, email addresses, and phone numbers, was publicly accessible for at least a year and likely longer, potentially exposing those affected to phishing attacks and other scams. Speaking of cybercrime, the LockBit ransomware gang has long operated under the radar, thanks to its professional operation and choice of targets. But over the past year, a series of missteps and internal drama has thrust it into the spotlight, potentially threatening its ability to continue operating with impunity.
In a new study, University of Minnesota law professors used the ChatGPT AI chatbot to answer graduate exams in four courses at their school. The AI passed all four, but with an average grade of C+. The University of Minnesota group noted that ChatGPT was good at addressing "basic legal rules" and producing summaries, but it floundered when trying to pinpoint the issues relevant to a case. When faced with business management questions in a different study, the chatbot was "amazing" with simple operations management and process analysis questions, but it couldn't handle advanced process questions. It even made mistakes with sixth-grade-level math – something other AI models have struggled with.
Last week DoNotPay CEO Joshua Browder announced that the company's AI chatbot would represent a defendant in a U.S. court, marking the first use of artificial intelligence for this purpose. Now the experiment has been cancelled, with Browder stating he's received objections from multiple state bar associations. "Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom," Browder tweeted on Thursday. "DoNotPay is postponing our court case and sticking to consumer rights." The plan had been to use DoNotPay's AI in a speeding case scheduled to be heard on Feb. 22. The chatbot would run on a smartphone, listening to what was being said in court before providing instructions to the anonymous defendant via an earpiece.
Cybersecurity is a critical issue in today's digital age, as cybercriminals continue to find new ways to infiltrate our systems and steal sensitive information. As the threat of cybercrime grows, it's becoming increasingly clear that traditional cybersecurity methods are no longer enough. But there's hope on the horizon: Artificial Intelligence (AI) is revolutionizing the way we think about cybersecurity and defend against cybercrime. One of the biggest benefits of AI in cybersecurity is its ability to detect and respond to threats in real time. Traditional cybersecurity methods rely on pre-defined rules and signatures to identify and block malicious activity. But as cybercriminals continue to evolve and find new ways to evade detection, this approach is falling short.
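To see why signature-based detection struggles against evolving attackers, here is a toy sketch of the approach the passage describes. The signatures and payloads below are invented for illustration and do not come from any real product:

```python
# Toy sketch of signature-based detection: known bad patterns are
# matched literally, so any novel variant slips through unflagged.
# These signatures and payloads are hypothetical examples.
SIGNATURES = ["evil.exe", "DROP TABLE", "<script>alert"]

def flag_payload(payload: str) -> bool:
    """Return True if the payload contains any known signature."""
    return any(sig in payload for sig in SIGNATURES)

print(flag_payload("GET /download/evil.exe"))   # matches a known signature -> True
print(flag_payload("GET /download/3vil.exe"))   # trivially mutated name -> False
```

The second call shows the weakness the article alludes to: a one-character change defeats the rule, which is why AI-based approaches that learn patterns of behavior rather than exact strings are attractive.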
A 21-year-old Louisiana man has been sentenced to 45 years in prison after plotting a Jeffrey Dahmer-like scheme to meet men on the gay dating app Grindr and kill them, according to federal officials. Chance Seneca of Lafayette Parish targeted one particular victim, as well as other gay men, through the app in 2020 because of their sexual orientation and gender, the Justice Department said. "The facts of this case are truly shocking, and the defendant's decision to specifically target gay men is a disturbing reminder of the unique prejudices and dangers facing the LGBTQ community today," Assistant Attorney General Kristen Clarke of the Justice Department's Civil Rights Division said in a Wednesday statement. Clarke continued: "The internet should be accessible and safe for all Americans, regardless of their gender or sexual orientation. We will continue to identify and intercept the predators who weaponize online platforms to target LGBTQ victims and carry out acts of violence and hate."
A "robot" lawyer powered by artificial intelligence was set to be the first of its kind to help a defendant fight a traffic ticket in court next month. But the experiment has been scrapped after "State Bar prosecutors" threatened the man behind the company that created the chatbot with prison time. Joshua Browder, CEO of DoNotPay, tweeted on Wednesday that his company "is postponing our court case and sticking to consumer rights." "Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom," he wrote. Browder also said he will not be sending the company's robot lawyer to court.
Kelly Conlon, who was kept from seeing the Rockettes, and Sam Davis, who was barred from attending a Rangers game, speak out against MSG Entertainment and James Dolan over their use of facial recognition on 'America's Newsroom.' The latest development from Madison Square Garden and CEO James Dolan is one that will likely leave fans very unhappy. Dolan threatened to cancel all alcohol sales at The Garden – he mentioned a Rangers game – in response to the New York State Liquor Authority, which is currently investigating his use of facial recognition technology that has resulted in several bans against lawyers who are suing him. Dolan said it all on Fox 5's "Good Day New York" with Rosanna Scotto.
There's plenty of concern that OpenAI's ChatGPT could help students cheat on tests, but just how well would the chatbot fare if you asked it to take a graduate-level exam? It would pass -- if only just. In a newly published study, University of Minnesota law professors had ChatGPT produce answers for graduate exams in four courses at their school. The AI passed all four, but with an average grade of C+. In another recent paper, Wharton School of Business professor Christian Terwiesch found that ChatGPT passed a business management exam with a B to B- grade.
Shutterstock, one of the internet's biggest sources of stock photos and illustrations, is now offering its customers the option to generate their own AI images. In October, the company announced a partnership with OpenAI, the creator of the wildly popular and controversial DALL-E AI tool. Now, the results of that deal are in beta testing and available to all paying Shutterstock users. The new platform is available in "every language the site offers," and comes included with customers' existing licensing packages, according to a press statement from the company. And, according to Gizmodo's own test, every text prompt you feed Shutterstock's machine results in four images, ostensibly tailored to your request.
Eileen Guo: It's essentially very low-paid workers that are being asked to label images to teach artificial intelligence how to recognize what it is that they're seeing. And so the fact that these images were shared on the internet was just incredibly surprising, given how sensitive they were. Jennifer: Labeling these images with relevant tags is called data annotation. The process makes it easier for computers to understand and interpret the data in the form of images, text, audio, or video. And it's used in everything from flagging inappropriate content on social media to helping robot vacuums recognize what's around them.
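The output of the data annotation described above can be as simple as an image reference paired with human-assigned tags. This minimal sketch (the field names and values are hypothetical, not from any specific annotation tool) shows the kind of record annotators produce:

```python
# Minimal sketch of a data-annotation record: an image reference plus
# the human-assigned tags a model will later be trained on.
# All field names and values here are hypothetical examples.
annotation = {
    "image": "frames/living_room_0042.jpg",
    "tags": ["sofa", "power cord", "pet"],
    "annotator_id": "worker-17",
}

def tags_for(record: dict) -> list:
    """Return the tags an annotator attached to a record."""
    return record["tags"]

print(tags_for(annotation))  # ['sofa', 'power cord', 'pet']
```

Each such record teaches a model what objects appear in one image; millions of them, produced by the low-paid workers Guo describes, are what make recognition systems like a robot vacuum's obstacle detection possible.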