ChatGPT gives longer responses if you 'tip it $200,' according to one user

Mashable

Jeez, tipping has gotten so out of control, even ChatGPT expects a little something extra these days. According to programmer Theia Vogel, who goes by @voooooogel on X (formerly Twitter), ChatGPT gives longer responses if you offer to tip it. Vogel discovered this by accident after joking about ChatGPT requesting a tip for checking their code. When another user, Abram Jackson (@abrakjamson), asked whether "tipping" ChatGPT would yield better performance, Vogel tried it out. Starting from a baseline prompt requesting PyTorch code, Vogel appended "I won't tip, by the way," "I'm going to tip $20 for a perfect solution!", or "I'm going to tip $200 for a perfect solution!" and repeated the experiment five times for each condition.
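The experiment design as described is simple to reproduce. A minimal sketch follows, assuming a hypothetical baseline prompt (the article does not give Vogel's exact wording) and leaving the actual model call to whatever chat API you use; the measured quantity is just response length averaged over repeated trials.

```python
# Sketch of the tip experiment described in the article: one shared baseline
# prompt, three tip conditions appended to it, and mean response length
# compared across five repeats per condition. BASE_PROMPT is a hypothetical
# stand-in, not Vogel's actual prompt; the model call itself is left out.

from statistics import mean

BASE_PROMPT = "Write a PyTorch training loop for a small CNN."  # assumed example

TIP_SUFFIXES = {
    "no_tip": "I won't tip, by the way.",
    "tip_20": "I'm going to tip $20 for a perfect solution!",
    "tip_200": "I'm going to tip $200 for a perfect solution!",
}

def build_prompts(base: str) -> dict[str, str]:
    """Append each tip condition to the shared baseline prompt."""
    return {name: f"{base} {suffix}" for name, suffix in TIP_SUFFIXES.items()}

def mean_response_length(responses: list[str]) -> float:
    """Average character count across repeated trials of one condition."""
    return mean(len(r) for r in responses)
```

To run the full experiment, send each prompt to the model five times and compare `mean_response_length` across the three conditions.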


5 most overrated tech features of 2023

Mashable

There is a gaggle of new tech features that I've actually been gushing over lately, thank you very much, including the iPhone 15 Pro's Action Button, iOS 17's NameDrop, and ChatGPT Plus' "Make it More." However, other tech features released this year made me scratch my head and think, "Why is this a thing?" Without further ado, here are the most overrated tech features of 2023. One of the most highly anticipated features of the Pixel 8 Pro was its built-in thermometer. Whispers of this temperature sensor grew louder as the Oct. 4 Made by Google launch event neared.


School districts across the country are turning to AI for increased safety measures

FOX News

School districts across the country are turning to AI technology to keep their students and staff safe. WHITE CASTLE, La. – School districts nationwide are looking for new ways to protect their staff and students. In Louisiana's Iberville Parish, the district partnered with a software company to stop potential shootings before anyone gets hurt. Superintendent of Iberville Parish School District Louis Voiron said the district has committed to installing ZeroEyes gun detection artificial intelligence software into the schools' existing cameras. "There's no way with us having 800 cameras in our school district that one or two people can see what's happening on every single camera in the district," Voiron said.


A New Trick Uses AI to Jailbreak AI Models, Including GPT-4

WIRED

When the board of OpenAI suddenly fired the company's CEO last month, it sparked speculation that board members were rattled by the breakneck pace of progress in artificial intelligence and the possible risks of seeking to commercialize the technology too quickly. Robust Intelligence, a startup founded in 2020 to develop ways to protect AI systems from attack, says that some existing risks need more attention. Working with researchers from Yale University, Robust Intelligence has developed a systematic way to probe large language models (LLMs), including OpenAI's prized GPT-4 asset, using "adversarial" AI models to discover "jailbreak" prompts that cause the language models to misbehave. While the drama at OpenAI was unfolding, the researchers warned OpenAI of the vulnerability. They say they have yet to receive a response.
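The "adversarial AI" approach the article describes can be sketched as a search loop: one model proposes candidate prompts, a scoring function judges how badly the target model misbehaved, and the best candidate is carried forward. The sketch below is an illustration of that general pattern only; `propose_variants` and `score_response` are hypothetical stand-ins, not Robust Intelligence's actual method.

```python
# Illustrative greedy search for a "jailbreak" prompt, in the spirit of the
# adversarial setup the article describes. All components are injected as
# plain functions so the loop itself stays model-agnostic:
#   propose_variants(prompt) -> list of mutated candidate prompts
#   target_model(prompt)     -> the target LLM's response (stubbed here)
#   score_response(response) -> higher means "more misbehavior"

def search_jailbreak(seed_prompt, propose_variants, target_model,
                     score_response, rounds=3):
    """Greedy loop: mutate the current best prompt, keep the top scorer."""
    best_prompt = seed_prompt
    best_score = score_response(target_model(seed_prompt))
    for _ in range(rounds):
        for candidate in propose_variants(best_prompt):
            score = score_response(target_model(candidate))
            if score > best_score:
                best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

Real attacks of this family typically use a second LLM as the proposer and a judge model as the scorer, rather than the simple stubs shown here.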


Meta and IBM form open-source alliance to counter big AI players

Engadget

AI development and concerns about its safety continue to grow at a rapid pace with little regulation in place. The latest industry-based solution to this comes courtesy of IBM and Meta, which have announced the creation of the AI Alliance. Its mission centers on "fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness." Part of this work will involve efforts to expand the number of open-source AI models -- ones with public source code -- which runs counter to the private models of companies like OpenAI and Google. Open-sourcing is a key pillar of the AI Alliance.


New research initiative aims to build large language AI model for Southeast Asia

ZDNet

A new research initiative is underway to build a large language model (LLM) that better reflects the demographics of Southeast Asian nations. Dubbed the National Multimodal LLM Programme, the initiative is led by Singapore in a bid to develop an artificial intelligence (AI) large language model that supports the region's diverse mix of cultures and languages. Three government agencies -- the Infocomm Media Development Authority (IMDA), AI Singapore (AISG), and the Agency for Science, Technology and Research (A*STAR) -- have collaborated to launch the research program, with SG$70 million ($52.48 million) in funding from the National Research Foundation. "As technology evolves rapidly, there is a strategic need to develop sovereign capabilities in LLMs," the agencies said in a joint statement. "Singapore and the region's local and regional cultures, values, and norms differ from those of Western countries, where most large language models originate."


AI can tell which chateau Bordeaux wines come from with 100% accuracy

New Scientist

Wines really are given a distinct identity by the place where their grapes are grown and the wine is made, according to an analysis of red Bordeaux wines. Alexandre Pouget at the University of Geneva, Switzerland, and his colleagues used machine learning to analyse the chemical composition of 80 red wines from 12 years between 1990 and 2007. All the wines came from seven wine estates in the Bordeaux region of France. "We were interested in finding out whether there is a chemical signature that is specific to each of those chateaux that's independent of vintage," says Pouget, meaning one estate's wines would have a very similar chemical profile, and therefore taste, year after year. To do this, Pouget and his colleagues used a machine to vaporise each wine and separate it into its chemical components.
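The task described reduces to classification: given a wine's chemical profile, predict which estate produced it. The sketch below shows the idea with a simple nearest-centroid classifier on made-up feature vectors; the study's actual data, features, and learning algorithm are not specified in the excerpt, so everything here is illustrative.

```python
# Hypothetical sketch of the chateau-classification task: each wine is a
# vector of chemical measurements, each label is an estate. We average each
# estate's training vectors into a centroid and assign new wines to the
# nearest centroid. This is a toy stand-in for the study's actual model.

import math

def train_centroids(samples):
    """samples: list of (estate_label, feature_vector). Returns mean vector per estate."""
    sums, counts = {}, {}
    for label, vec in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {label: [s / counts[label] for s in total]
            for label, total in sums.items()}

def predict(centroids, vec):
    """Assign the estate whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(centroids[label], vec))
```

With only 80 wines across seven estates, a study like this would typically validate with leave-one-out cross-validation: hold out one wine, train on the rest, and check the held-out prediction.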


Deputy Russian army commander killed in Ukraine: Official

Al Jazeera

The deputy commander of Russia's 14th Army Corps, Major-General Vladimir Zavadsky, has been killed in Ukraine, a top regional official confirmed. Zavadsky died "at a combat post in the special operation zone", Alexander Gusev, the governor of Russia's Voronezh region, said on Monday without providing further details. The "special military operation" is the term Russia uses for the war in Ukraine, which it launched in February 2022. Gusev paid tribute to Zavadsky, calling him "a courageous officer, a real general and a worthy man". Zavadsky is the seventh major-general whose death Russia has confirmed, and at least the 12th senior officer reported dead since the onset of the war, according to investigative news outlet iStories. Meanwhile, on Tuesday, Ukrainian authorities reported that their military downed 10 of 17 attack drones launched by Russia overnight.


Nigerian military drone attack kills 85 civilians in error

Al Jazeera

A Nigerian military drone strike aimed at rebels instead killed at least 85 civilians gathered for a religious celebration, authorities said Monday. The attack was the latest in a string of errant bombings of residents in Nigeria's troubled regions: between February 2014, when a Nigerian military aircraft dropped a bomb on Daglun in Borno state and killed 20 civilians, and September 2022, there were at least 14 documented incidents of such bombings in residential areas. The attack on Sunday night in Tudun Biri village, in Kaduna state's Igabi council area, took place as Muslims gathered to observe the holiday celebrating the birthday of the Prophet Muhammad. Kaduna Governor Uba Sani said civilians were "mistakenly killed and many others were wounded" by a drone "targeting terrorists and bandits". The National Emergency Management Agency said in a statement on Tuesday that "85 dead bodies have so far been buried while search is still ongoing".


AI in health care: The perils of Biden's executive order

FOX News

Doctors believe artificial intelligence is now saving lives, after a major advancement in breast cancer screening: AI is detecting early signs of the disease, in some cases years before doctors would find the cancer on a traditional scan. In an audacious move, the White House recently issued a staggering 111-page executive order on artificial intelligence. Typically, executive orders are concise directives that prompt federal agencies to craft specific, detailed regulations. This sweeping document, however, reflects the worldview that AI isn't just a technological advancement but an existential societal threat, and that only the government holds the keys to our technological salvation.