babysitter
Watermarking Needs Input Repetition Masking
David Khachaturov, Robert Mullins, Ilia Shumailov, Sumanth Dathathri
Recent advancements in Large Language Models (LLMs) have raised concerns over potential misuse, such as spreading misinformation. In response, two countermeasures have emerged: machine-learning-based detectors that predict whether text is synthetic, and LLM watermarking, which subtly marks generated text for identification and attribution. Meanwhile, humans are known to adjust their language to their conversational partners both syntactically and lexically. By implication, humans or unwatermarked LLMs could unintentionally mimic properties of LLM-generated text, making these countermeasures unreliable. In this work we investigate the extent to which such conversational adaptation happens. We call this phenomenon $\textit{mimicry}$ and demonstrate that both humans and LLMs end up mimicking, including the watermarking signal, even in seemingly improbable settings. This challenges current academic assumptions and suggests that, for long-term watermarking to be reliable, the likelihood of false positives needs to be significantly lower, while longer word sequences should be used for seeding watermarking mechanisms.
- Asia > Middle East > Jordan (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (3 more...)
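The abstract above argues that longer word sequences should be used to seed watermarking mechanisms. As an illustration of why the seed length matters, here is a minimal sketch of a context-seeded "green list" watermark check in the style of common LLM watermarking schemes; the function names, the SHA-256 seeding, and the 50% green ratio are illustrative assumptions, not the paper's actual mechanism.

```python
import hashlib
import random

def is_green(context, token, vocab, green_ratio=0.5):
    """Return True if `token` falls in the green list seeded by `context`.

    The seed is a hash of the preceding context tokens; with a longer
    context, accidentally reproducing the same seed (and hence the same
    green list) by mimicry becomes far less likely.
    """
    seed = int(hashlib.sha256(" ".join(context).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    green = set(rng.sample(sorted(vocab), int(len(vocab) * green_ratio)))
    return token in green

def green_fraction(tokens, vocab, context_len=1):
    """Fraction of tokens landing in their context's green list.

    A detector flags text as watermarked when this fraction is
    significantly above the base rate expected for unwatermarked text.
    """
    hits = total = 0
    for i in range(context_len, len(tokens)):
        total += 1
        if is_green(tokens[i - context_len:i], tokens[i], vocab):
            hits += 1
    return hits / total if total else 0.0
```

With a short context (e.g. `context_len=1`), any text that happens to reuse a (context, token) pair from watermarked output also scores as green, which is exactly the mimicry risk the abstract describes; longer contexts make such accidental matches rarer.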
The robot will see you now: Why experts say AI in health care is not to fear
Editor's note: This is part of a KSL.com series looking at the rise of artificial intelligence technology tools such as ChatGPT, the opportunities and risks they pose and what impacts they could have on various aspects of our daily lives. In the 1992 movie "Wayne's World," the character Garth is working on a robotic arm when Benjamin comes to ask him about making a change to his show. "We fear change," Garth says. He then looks down at the mechanical hand and begins to repeatedly smash it with a hammer. Many Americans have a similar reaction to change and technology, especially when it comes to using artificial intelligence in health care.
Why Government Needs More Women in AI
Women in tech can supercharge teams' creativity and help them stay under budget, meet deadlines and improve outcomes, studies show, so it's time for more women to pursue tech careers, according to a lead Department of Labor official speaking at GovernmentCIO Media & Research's Women Tech Leaders event Thursday. Kathy McNeill, who leads emerging technology strategy at the agency, said the federal government needs more women in AI to produce accurate data sets and data analysis. "AI is a reflection of those who develop it and the data sets we use," she said during a fireside chat. McNeill provided an example of how Google Translate took the phrase "she is a doctor and he is a babysitter" and translated it to "he is a doctor and she is a babysitter" in another language, to illustrate biases inherent in artificially intelligent algorithms. "A lot of systems were developed 10 to 20 years ago," she said.
Contextually Intelligent NLP Assistants – AI's Next Big Technical Challenge
Summary: Contextually intelligent, NLP-based interactive assistants are one of the next big things for AI/ML. The tech is already here from recommendation engines. The need to be more efficient and to become AI-augmented in our decision making is now. Getting the contextual awareness is the hard part. Last week we took the position that from a technical standpoint, 'deeply inclusive and contextually sensitive' AI is one of the two 'next big things' in AI.
Robots vs. Babysitters: Is Artificial Intelligence the Hot New Choice for Child Care?
AI-powered baby monitors are not marketed as a babysitter replacement but rather a supplement for working parents. Device-assisted child care is an almost century-old concept. The world's first electronic baby monitor, the Bakelite Zenith Radio Nurse, went on sale in the late 1930s--a response, at least in part, to the moral panic following the kidnapping and subsequent murder of the Lindbergh baby. Thus, using artificial intelligence (AI) to assist or relieve parents entirely of the burdens of nurturing is not an abrupt or unanticipated innovation--many parents already monitor their children remotely using cameras connected to their smartphones, sometimes with unanticipated and extremely creepy results--and, given the cost and difficulty of securing reliable human babysitters, it may also be an easy sell. Enter Turkey-based startup Invidyo and its AI-powered "smart baby and babysitter camera."
- Asia > Middle East > Republic of Türkiye (0.25)
- North America > United States > Oregon (0.05)
On Point: Using artificial intelligence to find babysitters
Finding a trusted babysitter or caregiver is a challenge that plagues many parents, but could artificial intelligence narrow the options? One new service claims it can help parents refine their choices by digging into a potential caregiver's social media past. The company's technology claims to assess someone's online behavior and give parents a risk score indicating what kind of person they are. Local parents we spoke with said they're unsure of its benefits. And Predictim has experienced some criticism, to the point that it has paused its operations.
- Information Technology > Services (0.49)
- Health & Medicine > Therapeutic Area (0.36)
Wanted: The 'perfect babysitter.' Must pass AI scan for respect and attitude.
When Jessie Battaglia started looking for a new babysitter for her 1-year-old son, she wanted more information than she could get from a criminal-background check, parent comments and a face-to-face interview. So she turned to Predictim, an online service that uses "advanced artificial intelligence" to assess a babysitter's personality, and aimed its scanners at one candidate's thousands of Facebook, Twitter and Instagram posts. The system offered an automated "risk rating" of the 24-year-old woman, saying she was at a "very low risk" of being a drug abuser. But it gave a slightly higher risk assessment -- a 2 out of 5 -- for bullying, harassment, being "disrespectful" and having a "bad attitude." The system didn't explain why it had made that decision.
- North America > United States > Kentucky (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
Babysitter screening app Predictim uses AI to sniff out bullies
If you're a parent with young kids, you probably know how arduous it can be to screen a babysitter. And among those who have hired one, a whopping 62 percent didn't bother to check their references. That spurred Sal Parsa and Joel Simonoff, the cofounders of Berkeley startup Predictim, to develop a no-frills solution that taps artificial intelligence (AI) to generate personality assessments from digital footprints. The eponymous Predictim platform, which launches today, uses natural language processing (NLP) and computer vision algorithms to sift through social media posts -- including tweets, Facebook posts, and Instagram photos -- for warning signs. "The current background checks parents generally use don't uncover everything that is available about a person. Interviews can't give a complete picture," Parsa said.
AI and gender bias – who watches the watchers? IDG Connect
Artificial intelligence (AI) and machine learning are causing excitement all over the world. Recent reports, such as one from Accenture, claim it has the potential to revolutionise the future of all business operations. For instance, research tasks that take hundreds of hours, such as candidate profiling, can now be performed by an AI within seconds. It's no wonder that many businesses are tapping into this trend – the potential savings, in both time and money, are extraordinary. However, what are the consequences of programming AI in today's environment?
- Europe > United Kingdom (0.16)
- North America > United States > Virginia (0.05)
Press A to change your life: 'Otis' and the new American cinema
Everything we experience is filtered through thick veils of personal baggage, self-interest and delusion, constantly skewing the world into the most comforting state possible. Universes of fragile concepts stand between what you think happened and what actually happened. 'Otis' is an interactive crime drama that allows the audience to shift perspectives among three characters at will, telling a single story from disparate points of view. In the free online prototype, viewers press A, S or D on the keyboard to instantly swap perspectives among a babysitter, a father and a man intent on robbing their house. Otis doesn't pause when the perspective changes; the story carries on for all three characters.
- North America > United States > West Virginia (0.05)
- North America > United States > New York > Kings County > New York City (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- Media > Film (1.00)
- Leisure & Entertainment > Games > Computer Games (0.52)
- Information Technology > Communications > Social Media (0.52)
- Information Technology > Artificial Intelligence > Games (0.42)