

Sutton's predictions v The Coral, Starsailor, Picture Parlour & AI

BBC News

Forget Liverpool against Everton or Arsenal against Manchester City, the big fixture this weekend is Chris Sutton versus AI chatbot Copilot. Sutton claimed on this week's Monday Night Club that he is more intelligent than AI, and the stats do support him, at least when it comes to football scores. He is top of the BBC predictions league table (see bottom of page) so far, with three wins in the first four weeks, and he has beaten AI every time. "I am 4-0 up on AI. I don't know if it feels pressure, but it should," Sutton said. "Also, why is it not going to the best source?" AI, however, does not agree that Sutton has superior intelligence, and seems to suggest that it is a bit early for him to be celebrating. When asked "are you worried that Chris Sutton is cleverer than you?", Copilot replied, "Not in the slightest."


AI 'Nudify' Websites Are Raking in Millions of Dollars

WIRED

For years, so-called "nudify" apps and websites have mushroomed online, allowing people to create nonconsensual and abusive images of women and girls, including child sexual abuse material. Despite some lawmakers and tech companies taking steps to limit the harmful services, millions of people are still accessing the websites every month, and the sites' creators may be making millions of dollars each year, new research suggests. An analysis of 85 nudify and "undress" websites--which allow people to upload photos and use AI to generate "nude" pictures of the subjects with just a few clicks--has found that most of the sites rely on tech services from Google, Amazon, and Cloudflare to operate and stay online. The findings, revealed by Indicator, a publication investigating digital deception, say that the websites had a combined average of 18.5 million visitors for each of the past six months and collectively may be making up to $36 million per year. Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, says the murky nudifier ecosystem has become a "lucrative business" that "Silicon Valley's laissez-faire approach to generative AI" has allowed to persist.


OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation

WIRED

The Food and Drug Administration has been meeting with OpenAI to discuss the agency's use of AI, according to sources with knowledge of the meetings. The meetings appear to be part of a broader effort at the FDA to use this technology to speed up the drug approval process. "Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We've just completed our first AI-assisted scientific review for a product, and that's just the beginning."


The Trump administration could reverse progress on AI regulation

Al Jazeera

While efforts to regulate the creation and use of artificial intelligence (AI) tools in the United States have been slow to make gains, the administration of President Joe Biden has attempted to outline how AI should be used by the federal government and how AI companies should ensure the safety and security of their tools. The incoming Trump administration, however, has a very different view on how to approach AI, and it could end up reversing some of the progress that has been made over the past several years. President Biden signed an executive order in October 2023 that was meant to promote the "safe, secure, and trustworthy development and use of artificial intelligence" within the federal government. President-elect Donald Trump has promised to repeal that executive order, saying it would hinder innovation. Biden was also able to get seven leading AI companies to agree to guidelines for how AI should be safely developed going forward.


AI Is Taking Water From the Desert

The Atlantic - Technology

One scorching day this past September, I made the dangerous decision to try to circumnavigate some data centers. The ones I chose sit between a regional airport and some farm fields in Goodyear, Arizona, half an hour's drive west of downtown Phoenix. When my Uber pulled up beside the unmarked buildings, the temperature was 97 degrees Fahrenheit. The air crackled with a latent energy, and some kind of pulsating sound was emanating from the electric wires above my head, or maybe from the buildings themselves. With no shelter from the blinding sunlight, I began to lose my sense of what was real. Microsoft announced its plans for this location, and two others not so far away, back in 2019--a week after the company revealed its initial $1 billion investment in OpenAI, the buzzy start-up that would later release ChatGPT.


Volume Transfer: A New Design Concept for Fabric-Based Pneumatic Exosuits

Liu, Chendong, Yang, Dapeng, Chen, Jiachen, Dai, Yiming, Jiang, Li, Liu, Hong

arXiv.org Artificial Intelligence

The fabric-based pneumatic exosuit is now a hot research topic because it is lighter and softer than traditional exoskeletons. Existing research has focused more on the mechanical properties of the exosuit (e.g., torque and speed) and less on its wearability (e.g., appearance and comfort). This work presents a new design concept for fabric-based pneumatic exosuits, Volume Transfer, which means transferring the volume of pneumatic actuators from beyond the garment's profile to the inside. This allows for a concealed appearance and a larger stress area while maintaining adequate torque. To verify this concept, we develop a fabric-based pneumatic exosuit for knee extension assistance. Its profile is only 26 mm, and its stress area wraps around almost half of the leg. We use a mathematical model and simulation to determine the parameters of the exosuit, avoiding multiple iterations of the prototype. Experimental results show that the exosuit can generate a torque of 7.6 N·m at a pressure of 90 kPa and produce a significant reduction in the electromyography activity of the knee extensor muscles. We believe that Volume Transfer could be used widely in future fabric-based pneumatic exosuit designs to significantly improve wearability.
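To give a sense of scale for the reported numbers, a back-of-envelope model of a pneumatic actuator's assistive torque is pressure times effective area times moment arm. This is a generic first-order sketch, not the paper's actual model; the effective-area and moment-arm values below are illustrative assumptions chosen only to show that 90 kPa can plausibly yield a torque on the order of the reported 7.6 N·m.

```python
# Back-of-envelope pneumatic actuator model (illustrative, not from the paper):
# torque ≈ gauge pressure × effective area × moment arm.

def assist_torque(pressure_pa: float, effective_area_m2: float, moment_arm_m: float) -> float:
    """Assistive torque (N·m) from gauge pressure acting on an effective area at a moment arm."""
    return pressure_pa * effective_area_m2 * moment_arm_m

# 90 kPa with an ASSUMED 0.0017 m^2 effective area and 0.05 m moment arm
torque = assist_torque(90e3, 0.0017, 0.05)
print(f"{torque:.2f} N·m")  # 7.65 N·m, the same order as the reported 7.6 N·m
```

The point of the sketch is only that torque scales linearly with both pressure and stress area, which is why enlarging the stress area (as Volume Transfer does) lets the suit keep torque adequate at moderate pressures.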


Are killer robots the future of war?

Al Jazeera

Humanity stands on the brink of a new era of warfare. Driven by rapid developments in artificial intelligence, weapons platforms that can identify, target and decide to kill human beings on their own -- without an officer directing an attack or a soldier pulling the trigger -- are fast transforming the future of conflict. Officially, they are called lethal autonomous weapons systems (LAWS), but critics call them killer robots. Many countries, including the United States, China, the United Kingdom, India, Iran, Israel, South Korea, Russia and Turkey, have invested heavily in developing such weapons in recent years. A United Nations report suggests that Turkish-made Kargu-2 drones in fully automatic mode marked the dawn of this new age when they attacked combatants in Libya in 2020 amid that country's ongoing conflict. Autonomous drones have also played a crucial role in the war in Ukraine, where both Moscow and Kyiv have deployed these uncrewed weapons to target enemy soldiers and infrastructure.


The pause AI movement is remarkable, but won't work

#artificialintelligence

The open letter calling for an immediate six-month pause in the AI development arms race, signed by more than 1,600 tech luminaries, researchers and responsible-technology advocates under the umbrella of the Future of Life Institute, is stunning on its face. Self-reflection and caution have never been defining qualities of technology sector leaders. Outside of nuclear technology, it's hard to identify another time when so many have publicly rallied to slow the pace of technology development, much less call for government regulation and intervention. "Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources," the letter states. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than [OpenAI's] GPT-4."


College student claims app can detect essays written by chatbot ChatGPT

The Guardian

A 22-year-old college student has developed an app which he claims can detect whether text is written by ChatGPT, the explosive chatbot raising fears of plagiarism in academia. Edward Tian, a senior at Princeton University, developed GPTZero over a summer break. It had 30,000 hits within a week of its launch. Tian said the motivation was to address the use of artificial intelligence to evade anti-plagiarism software to cheat in exams with quick and credible academic writing. His initial tweet, which claimed the app could "quickly and efficiently" detect whether an essay had been written by artificial intelligence, went viral with more than 5m views.


Australian universities to return to 'pen and paper' exams after students caught using AI to write essays

#artificialintelligence

Australian universities have been forced to change the way they run exams and other assessments amid fears students are using emerging artificial intelligence software to write essays. Major institutions have added new rules which state that the use of AI is cheating, with some students already caught using the software. But one AI expert has warned universities are in an "arms race" they can never win. ChatGPT, which generates text on any subject in response to a prompt or query, was launched in November by OpenAI and has already been banned across all devices in New York's public schools due to concerns over its "negative impact on student learning" and potential for plagiarism. In London, one academic tested it against a 2022 exam question and said the AI's answer was "coherent, comprehensive and sticks to the points, something students often fail to do", adding he would have to "set a different kind of exam" or deprive students of internet access for future exams. In Australia, academics have cited concerns over ChatGPT and similar technology's ability to evade anti-plagiarism software while providing quick and credible academic writing.