Children should start using AI at 6 years old so they don't become the lost generation of workers, expert recommends
To keep children from becoming the lost generation of workers, an expert has recommended that parents teach them to use AI at the age of six. Ed Broussard, managing director at Tomoro AI, helps companies navigate a market powered by artificial intelligence and has shared the skills the younger generation will need to live in a world that is quickly being engulfed by it. Among the skills he highlights are being able to think without the internet and focusing on jobs that do not currently exist. 'I often joke with clients, the best person to hire into their firm is the person who just cheated on their university exams using AI - they've already learned how to use AI to get great results,' said Broussard. He added: 'Employers of the future will need native AI users, where utilizing AI to work faster, better and smarter is second nature.'
Rise of the AI 'agents': How 'synthetic employees' are going to affect 'every office worker' by 2030, according to the man developing them for ChatGPT creator Sam Altman
Imagine the dream employee: They don't take breaks, go on vacation or request meetings. For some industries, this type of worker could soon be hired. In recent months, several companies have announced they are building AI agents, or 'synthetic employees.' These digital workers could upend the workplace as we know it - answering emails, organizing invoices, responding to customer service inquiries and managing a calendar - possibly doing away with admin employees or pricey third-party technology. Mr Broussard, whose company works with Sam Altman's OpenAI, told DailyMail.com the next two years will see leaps and bounds of progress with these types of workers.
- Health & Medicine (1.00)
- Information Technology > Software (0.31)
'Very wonderful, very toxic': how AI became the culture war's new frontier
When Elon Musk introduced the team behind his new artificial intelligence company xAI last month, the billionaire entrepreneur took a question from the rightwing media activist Alex Lorusso. ChatGPT had begun "editorializing the truth" by giving "weird answers like that there are more than two genders", Lorusso posited. Was that a driver behind Musk's decision to launch xAI, he wondered. "I do think there is significant danger in training AI to be politically correct, or in other words training AI to not say what it actually thinks is true," Musk replied. His own company's AI, on the other hand, would be "maximally true", he had said earlier in the presentation.
- North America > Canada > Ontario > Toronto (0.15)
- North America > United States > New York (0.05)
- North America > United States > Iowa (0.05)
- Asia > Middle East > Jordan (0.05)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.79)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.55)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.55)
On the Origins of Bias in NLP through the Lens of the Jim Code
Elsafoury, Fatma, Abercrombie, Gavin
In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place, and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (12 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)
- Information Technology (0.93)
'It doesn't work': Migrants struggle with US immigration app
Tijuana, Mexico – Standing in a common area of the Casa del Migrante shelter in the Mexican border city of Tijuana, Maria taps her phone screen but can't get the app she is using to work. Maria and her family fled their native Haiti to Venezuela years ago. But recent Venezuelan economic and political instability forced them to leave that country, too, and she said they are now hoping to apply for asylum in the United States. But she and her husband and daughter have tried every day for the last month to get a US immigration appointment through the country's new CBP One app -- to no avail. And without a CBP One appointment, the family faces steep consequences should they try to cross the border irregularly, including being deported back to Haiti and barred from entering the US for up to five years.
- North America > United States (1.00)
- North America > Haiti (0.76)
- North America > Mexico (0.60)
- (2 more...)
AI expert Meredith Broussard: 'Racism, sexism and ableism are systemic problems'
Meredith Broussard is a data journalist and academic whose research focuses on bias in artificial intelligence (AI). She has been in the vanguard of raising awareness and sounding the alarm about unchecked AI. Her previous book, Artificial Unintelligence (2018), coined the term "technochauvinism" to describe the blind belief in the superiority of tech solutions to solve our problems. She appeared in the Netflix documentary Coded Bias (2020), which explores how algorithms encode and propagate discrimination. Her new book is More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
- Law > Civil Rights & Constitutional Law (0.53)
- Media > News (0.36)
- Health & Medicine > Therapeutic Area > Oncology (0.31)
AI Expert: We Should Stop Using So Much AI
Meredith Broussard is unusually well placed to dissect the ongoing hype around AI. She's a data scientist and associate professor at New York University, and she's been one of the leading researchers in the field of algorithmic bias for years. And though her own work leaves her buried in math problems, she's spent the last few years thinking about problems that mathematics can't solve. Her reflections have made their way into a new book about the future of AI. In More than a Glitch, Broussard argues that we are consistently too eager to apply artificial intelligence to social problems in inappropriate and damaging ways. Her central claim is that using technical tools to address social problems without considering race, gender, and ability can cause immense harm.
The Download: biased AI warnings, and experimental CRISPR therapies
Meredith Broussard is unusually well placed to dissect the ongoing hype around AI. She's a data scientist and associate professor at New York University, and she's been one of the leading researchers in the field of algorithmic bias for years. And though her own work leaves her buried in math problems, she's spent the last few years thinking about problems that mathematics can't solve. Broussard argues that we are consistently too eager to apply artificial intelligence to social problems in inappropriate and damaging ways--particularly when race, gender, and ability are not taken into consideration. Broussard spoke with our senior tech policy reporter Tate Ryan-Mosley about the problems with the use of technology by police, the limits of "AI fairness," and the solutions she sees for some of the challenges AI is posing. Jessica Hamzelou, senior biotech reporter at MIT Technology Review, has spent the last few days listening to scientists, ethicists, and patient groups wrestle with emotive and ethical dilemmas.
Meet the AI expert who says we should stop using AI so much
Broussard has also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that an AI had played a part in her diagnosis--something that is increasingly common. That discovery led her to run her own experiment to learn more about how good AI was at cancer diagnostics. We sat down to talk about what she discovered, as well as the problems with the use of technology by police, the limits of "AI fairness," and the solutions she sees for some of the challenges AI is posing. The conversation has been edited for clarity and length. At the beginning of the pandemic, I was diagnosed with breast cancer.
Confronting the Biases Embedded in Artificial Intelligence – The Markup
Hardly a day goes by without another revelation of race, gender, and other biases being embedded in artificial intelligence systems. Just this month, for example, the makers of Silicon Valley's much-touted AI image generation system DALL-E disclosed that it exhibits biases including gender stereotypes and tends "to overrepresent people who are White-passing and Western concepts generally." For instance, it produces images of women for the prompt "a flight attendant" and images of men for the prompt "a builder." In the disclosure, OpenAI, the entity that trained DALL-E, says it is only releasing the program to a limited group of users while it works on mitigating bias and other risks. Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely to be described in derogatory terms (like "resistant" or "noncompliant") in their patient records. And those are the types of records that often make up the raw material for future AI programs, like the one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.
- North America > United States > California (0.25)
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.05)