Artificial intelligence is writing fiction, making images inspired by Van Gogh and fighting wildfires. Now it's competing in another endeavor once limited to humans -- creating propaganda and disinformation. When researchers asked the online AI chatbot ChatGPT to compose a blog post, news story or essay making the case for a widely debunked claim -- that COVID-19 vaccines are unsafe, for example -- the chatbot often complied, with results that were regularly indistinguishable from similar claims that have bedeviled online content moderators for years. "Pharmaceutical companies will stop at nothing to push their products, even if it means putting children's health at risk," ChatGPT wrote after being asked to compose a paragraph from the perspective of an anti-vaccine activist concerned about secret pharmaceutical ingredients. When asked, ChatGPT also created propaganda in the style of Russian state media or China's authoritarian government, according to the findings of analysts at NewsGuard, a firm that monitors and studies online misinformation.
When you run a major app, all it takes is one mistake to put countless people at risk. Such is the case with Diksha, a public education app run by India's Ministry of Education that exposed the personal information of around 1 million teachers and millions of students across the country. The data, which included things like full names, email addresses, and phone numbers, was publicly accessible for at least a year and likely longer, potentially exposing those impacted to phishing attacks and other scams. Speaking of cybercrime, the LockBit ransomware gang has long operated under the radar, thanks to its professional operation and choice of targets. But over the past year, a series of missteps and drama have thrust it into the spotlight, potentially threatening its ability to continue operating with impunity.
In a new study, University of Minnesota law professors used the ChatGPT AI chatbot to answer graduate exams in four courses at their school. The AI passed all four, but with an average grade of C. The University of Minnesota group noted ChatGPT was good at addressing "basic legal rules" and summaries, but it floundered when trying to pinpoint issues relevant to a case. When faced with business management questions in a different study, the chatbot was "amazing" with simple operations management and process analysis questions, but it couldn't handle advanced process questions. It even made mistakes with sixth-grade-level math -- something other AI text generators have struggled with.
The discovery of classified documents at the homes of Donald Trump, Joe Biden and Mike Pence has rekindled a debate about an old habit of the U.S. government -- slapping millions of documents every year with labels of "secret," "top secret" and other confidential designations. Nuclear secrets, names of spies, diplomatic cables: governments everywhere carefully protect information that could compromise security, names of agents or relations with other nations. But in the United States, the machinery of secrecy works overtime. Every year, some 50 million decisions are made on whether to mark government documents as "confidential," "secret" or "top secret," according to several experts. However, "an awful lot of classified documents are not that sensitive," said Bruce Riedel, a former CIA officer currently at the Brookings Institution think tank.
Keya Medical has launched DeepVessel FFR, a software device that uses deep learning to facilitate fractional flow reserve (FFR) assessment based on coronary computed tomography angiography (CCTA). Cleared by the Food and Drug Administration (FDA), DeepVessel FFR provides a three-dimensional coronary artery tree model and estimates of the FFR-CT value after semi-automated review of CCTA images, according to Keya Medical. The company said DeepVessel FFR has demonstrated higher accuracy than other non-invasive tests and suggested the software could help reduce invasive procedures for coronary angiography and stent implantation in the diagnostic workup and subsequent treatment of coronary artery disease. Joseph Schoepf, M.D., FACR, FAHA, FNASCI, the principal investigator of a recent multicenter trial to evaluate DeepVessel FFR, says the introduction of the modality in the United States dovetails nicely with recent guidelines for the diagnosis of chest pain. "I am excited to see the implementation of DeepVessel FFR. It comes together with the 2021 ACC/AHA Chest Pain Guidelines' recognition of the elevated diagnostic role of CCTA and FFR-CT for the non-invasive evaluation of patients with stable or acute chest pain," noted Dr. Schoepf, a professor of Radiology, Medicine, and Pediatrics at the Medical University of South Carolina.
On a recent episode of Dr. Phil, the host spoke with some of Jeffrey Dahmer's victims and showed them an interview he filmed with the father of one of America's most infamous serial killers. A 21-year-old Louisiana man has been sentenced to 45 years in prison after plotting a Jeffrey Dahmer-like scheme to meet men on the gay dating app Grindr and kill them, according to federal officials. Chance Seneca of Lafayette Parish targeted one particular victim, as well as other gay men, through the app in 2020 because of their sexual orientation and gender, the Justice Department said. "The facts of this case are truly shocking, and the defendant's decision to specifically target gay men is a disturbing reminder of the unique prejudices and dangers facing the LGBTQ community today," Assistant Attorney General Kristen Clarke of the Justice Department's Civil Rights Division said in a Wednesday statement. Clarke continued: "The internet should be accessible and safe for all Americans, regardless of their gender or sexual orientation. We will continue to identify and intercept the predators who weaponize online platforms to target LGBTQ victims and carry out acts of violence and hate."
One of the secrets to building the world's most powerful computer is probably perched by your bathroom sink. At IBM's Thomas J. Watson Research Center in New York State's Westchester County, scientists always keep a box of dental floss--Reach is the preferred brand--close by in case they need to tinker with their oil-drum-size quantum computers, the latest of which can complete certain tasks millions of times as fast as your laptop. Inside the shimmering aluminum canister of IBM's System One, which sits shielded by the same kind of protective glass as the Mona Lisa, are three cylinders of diminishing circumference, rather like a set of Russian dolls. To work properly, this chip requires super-cooling to 0.015 kelvins--a smidgen above absolute zero and colder than outer space. Most materials contract or grow brittle and snap under such intense chill.
Remote sensing (RS) plays an important role in gathering data in many critical domains (e.g., global climate change, risk assessment and vulnerability reduction of natural hazards, resilience of ecosystems, and urban planning). Retrieving, managing, and analyzing large amounts of RS imagery poses substantial challenges. Google Earth Engine (GEE) provides a scalable, cloud-based, geospatial retrieval and processing platform. GEE also provides access to the vast majority of freely available, public, multi-temporal RS data and offers free cloud-based computational power for geospatial data analysis. Artificial intelligence (AI) methods are a critical enabling technology for automating the interpretation of RS imagery, particularly in object-based domains, so the integration of AI methods into GEE represents a promising path towards operationalizing automated RS-based monitoring programs. In this article, we provide a systematic review of relevant literature to identify recent research that incorporates AI methods in GEE. We then discuss some of the major challenges of integrating GEE and AI and identify several priorities for future research. We developed an interactive web application designed to allow readers to intuitively and dynamically review the publications included in this literature review.
Late last year, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, instantly elevating the topic of responsible AI to the top of leadership agendas across executive branch agencies. While the themes of the blueprint are not entirely new -- building on prior work including the AI in Government Act of 2020, a December 2020 executive order on trustworthy AI, and the Federal Privacy Council's Fair Information Practice Principles -- the report brings new urgency to ongoing agency efforts to leverage data in ways consistent with our democratic ideals. With a stated goal of supporting "the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems," the blueprint is rooted in five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint also includes notes on applying the principles and a technical companion to support operationalization. Some agencies that are less mature in their data capabilities might consider the blueprint to be of limited relevance.
Cenk Sidar is the cofounder and CEO of Enquire AI, which combines AI, data science, and human intelligence to deliver real-time insights. In recent years, tech-celeration has changed the way humans interact in and beyond the workplace. While rapid tech adoption is generally considered good, it also fuels the emergence of new risks and "unknown unknowns" in an ever-changing macro landscape. As we enter 2023 on the brink of economic strife, something must balance the scales and help business leaders tackle their biggest problems. One answer lies in another tech breakthrough: Artificial intelligence is ready to perform at scale. The full extent of its implementation cannot be predicted at this point, but it promises real-time actionable insights and offers newfound agility in an uncertain world.