AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond Human Understanding

Noever, David

arXiv.org Artificial Intelligence

This paper investigates the potential for large language models (LLMs) to develop private tonal languages for machine-to-machine (M2M) communication. Inspired by cryptophasia in human twins (affecting up to 50% of twin births) and natural tonal languages like Mandarin and Vietnamese, we implement a precise character-to-frequency mapping system that encodes the full ASCII character set (32-126) using musical semitones. Each character is assigned a unique frequency, creating a logarithmic progression beginning with space (220 Hz) and ending with tilde (50,175.42 Hz). This spans approximately 7.9 octaves, with higher characters deliberately mapped to ultrasonic frequencies beyond human perception (>20 kHz). Our implemented software prototype demonstrates this encoding through visualization, auditory playback, and ABC musical notation, allowing for analysis of information density and transmission speed. Testing reveals that tonal encoding can achieve information rates exceeding human speech while operating partially outside human perceptual boundaries. This work responds directly to concerns about AI systems catastrophically developing private languages within the next five years, providing a concrete prototype software example of how such communication might function and the technical foundation required for its emergence, detection, and governance.
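The abstract's figures are consistent with a mapping of one equal-tempered semitone per ASCII code point above a 220 Hz base: space (32) lands on 220 Hz and tilde (126) on 50,175.42 Hz. A minimal sketch of such an encoder, assuming that semitone scheme (the function names are illustrative, not from the paper's software):

```python
BASE_FREQ = 220.0          # space (ASCII 32) maps to 220 Hz, per the abstract
SEMITONE = 2 ** (1 / 12)   # equal-tempered semitone ratio

def char_to_freq(ch: str) -> float:
    """Map a printable ASCII character (32-126) to a tone frequency in Hz,
    rising one semitone per code point above the 220 Hz base."""
    code = ord(ch)
    if not 32 <= code <= 126:
        raise ValueError("character outside printable ASCII range")
    return BASE_FREQ * SEMITONE ** (code - 32)

def encode(text: str) -> list[float]:
    """Encode a string as a sequence of tone frequencies."""
    return [char_to_freq(c) for c in text]

print(round(char_to_freq(' '), 2))   # 220.0
print(round(char_to_freq('~'), 2))   # 50175.42
print(char_to_freq('o') > 20000)     # True
```

Under this scheme, characters from 'o' (ASCII 111) upward exceed 20 kHz, which would account for the abstract's claim that higher characters fall into the ultrasonic range.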


Preventing Your AI Bot From Getting Sued

#artificialintelligence

There is a clear gap between the moment a management consultancy gets the CEO of a financial institution to sign off on an ambitious multi-year plan to retire some antiquated process and the moment anything actually changes. The consultants, never wanting to dirty their hands with anything that isn't strategy, leave about as quickly as they came, forcing the bank, and its change function if it has one, to turn to one of the big four accounting firms. These firms then send in hordes of fresh university graduates who steadily chip away at the original targets of the change plan, usually leaving the bank with something utterly unrecognizable. Nischal Tanna, who previously spent years in transformation functions at major US and Singaporean banks, is trying to come up with a better way of doing things. In an interview with finews.asia, the CEO of Transform hub said he tries to join the consulting and execution functions more effectively by using AI and focusing on particular implementation phases.


Preventing an AI-related catastrophe - Problem profile

#artificialintelligence

Why is it that humans, and not chimpanzees, control the fate of the world? Humans have shaped every corner of our planet. Chimps, despite being pretty smart compared to other nonhuman animals, have not. This is (roughly) because of humans' intelligence. Companies and governments are spending billions of dollars a year developing AI systems -- and as these systems grow more advanced, they could (eventually) displace humans as the most intelligent things on the planet. As we'll see, they're making progress.


How Artificial Intelligence is Preventing the Spread of Infectious Diseases?

#artificialintelligence

Artificial intelligence can process huge volumes of data and distill it into patterns that support human planning and decision-making. It can handle data across many domains, work that is extremely laborious and time-consuming for people. This ability to assimilate, digest, and analyze data in order to anticipate future pandemics and disease outbreaks is essential. Technological change is shaped and structured by social norms and relations, which are in turn affected by technological change. A wealth of new technologies is opening up not only rapid molecular identification of pathogens but also more precise surveillance of infectious diseases.


AI: Preventing a Frankenstein's monster

#artificialintelligence

One of the key lessons of Mary Shelley's famous story of Frankenstein's monster is that things aren't always greater than the sum of their parts, regardless of the quality of the parts themselves. An altogether less visceral but equally composition-based process goes into building today's artificial intelligence (AI) platforms. One of the most powerful AI approaches used today is deep learning, a machine learning technique that identifies patterns in different sets of input data and uses them to generate insights that help inform human decision-making. Deep learning applies vast layers of artificial neural networks to data, creating a 'black box' of calculations that is impossible for humans to understand. Luckily for data scientists, preventing the creation of a 'monster' when developing AI requires an understanding of data validity rather than the supernatural. AI platforms built on deep learning assume that more data equals better accuracy.


Artificially Intelligent Cars Are Getting Better at Preventing Your Death

#artificialintelligence

Researchers have developed a new early-warning system for self-driving vehicles, leveraging artificial intelligence (AI) capable of learning from thousands of real traffic scenarios, according to a new study conducted with the BMW Group and published in the journal IEEE Transactions on Intelligent Transportation Systems. In other words, you may soon ride in a self-driving car with an AI's figurative finger on the buzzer, ready to keep you from dying in transit by giving seven seconds' warning of critical situations the car can't handle on its own. So far, the AI can do it with more than 85% accuracy. The drive to increase safety for self-driving cars feels almost self-explanatory, but efforts typically rely on complicated models designed to enhance vehicles' ability to analyze the traffic behavior of road users. Driving on public roads, however, always comes with risk and uncertainty.


6 Chatbot Myths That Are Preventing Your Business From Growing

#artificialintelligence

Even though the history of chatbots goes back to the 1960s, chatbots are one of the latest developments in technology that all businesses should consider using. When properly implemented, chatbots can help to improve employee engagement as well as customer experience. Yet many businesses are still unable to fully reap the benefits of this technology, partly due to a number of common chatbot myths. Here we debunk six popular chatbot myths. One of the biggest misconceptions is the notion that chatbots are only applicable to specific industries -- most commonly, that they only deliver results in customer service.


Preventing the Spread of Invasive Species Using AI

#artificialintelligence

Researchers at the Harry Butler Institute, Murdoch University, are helping to protect Australia's biosecurity. In a first-of-its-kind application in Western Australia, AI technology in the field, using IBM Power-9-based hardware and PowerAI Vision technology, is providing scientists with a real-time profile of biosecurity threats within seconds, helping them to identify invasive species even when distinguishing features may not be visible to the human eye.


Why Deep Learning Is the Only Option for Preventing The Threats of Tomorrow - Cyber Startup Observatory

#artificialintelligence

From ransomware to spyware and banking trojans, the types of malware threats are many. Yet the one threat that seems to pose the greatest challenge to organizations and their cybersecurity solutions is unknown malware. This is because the means most typically used to counter cyber threats to date remain insufficient when it comes to unknowns. Signature-based: traditional antivirus solutions usually lean on this technique. Key data from a given file is distilled into a signature, and any future file matching a known-malicious signature is classified as malicious. The key data is usually strings or other byte sequences from the code.
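The signature-matching idea described above can be sketched in a few lines of Python. This is a simplified illustration, not a real AV engine: the whole-file SHA-256 digest and the blacklist entries here are assumptions for demonstration (production scanners also match shorter byte-sequence patterns inside files):

```python
import hashlib

# Hypothetical blacklist of known-malware digests (illustrative only;
# this entry is the SHA-256 of the empty byte string, used as a demo sample)
KNOWN_BAD_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_signature(data: bytes) -> str:
    """Derive a signature from a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_malicious(data: bytes) -> bool:
    """Flag a file only if its signature matches a previously seen sample."""
    return file_signature(data) in KNOWN_BAD_SIGNATURES

print(is_known_malicious(b""))        # True: demo sample is on the blacklist
print(is_known_malicious(b"benign"))  # False: never-before-seen bytes
```

The second call shows exactly why unknown malware slips through: a sample that has never been catalogued produces a signature the blacklist cannot contain.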


Preventing an AI Apocalypse by Seth Baum

#artificialintelligence

The idea that the type of reset and rule-making you suggest is possible, or even could have been possible if different choices had been made in the past, is an illusion. A very simple illustration of this: look through all the articles you can find on PS that talk about "regulation" of different aspects of tech and biotech (and there are quite a few), and you will notice that literally *all of them* will pontificate that *something* should be done, or occasionally outline in broad-brush terms *what* should be done, but *not a single one* will ever suggest *how* it can be done, as in, laws that are enforceable and policeable, how the penalties would work, and so on. Instead, it's always all very Jean-Luc Picard: "Make it happen, make it so." And it's not just writers on PS; as someone who has been following this debate for a very long time, I haven't found anyone put forward ideas about regulation of tech that can demonstrably work. If bodies do eventually get to the point of passing laws, they are of the "Cookie Consent" variety: misdirected and futile nonsense.