Biden looks to limit AI product exports, tech leaders say they'll lose global market share
Leaders in the tech industry are urging the Biden administration not to add a new regulation that would limit artificial intelligence exports, citing concerns that it is overbroad and could diminish the United States' global dominance in AI. The new rule, which industry leaders say could come as early as the end of this week, seeks to shore up the U.S. economy and national security by adding new restrictions on how many U.S.-made artificial intelligence products can be deployed across the globe. "A rule of this nature would cede the global market to U.S. competitors who will be eager to fill the untapped demand created by placing arbitrary constraints on U.S. companies' ability to sell basic computing systems overseas," stated a Monday letter from Jason Oxman, the president and CEO of the Information Technology Industry Council (ITI), sent to Commerce Department Secretary Gina Raimondo. "Should the U.S. lose its advantage in the global AI ecosystem, it will be difficult, if not impossible, to regain in the future." The process to place new export controls on artificial intelligence goes back to October 2022, when the Biden administration's Commerce Department first released an updated export framework aimed at slowing the progress of Chinese military programs. Details of the new incoming export controls surfaced after the Biden administration called on American tech company NVIDIA to stop selling certain computer chips to China the following month.
- Asia > China (0.63)
- North America > United States > North Dakota (0.05)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
AI Should Complement Humans at Work, Not Replace Them, TIME Panelists Say
Artificial intelligence is widely expected to transform our lives. Leaders from across the sector gathered for a TIME dinner conversation on Nov. 30, where they emphasized the need to center humans in decisions around incorporating the technology into workflows and advocated for governments and industry leaders to take a responsible approach to managing the risks the technology poses. As part of the TIME100 Talks series in San Francisco, senior correspondent Alice Park spoke with panelists Cynthia Breazeal, a pioneer in social robotics and the Dean for Digital Learning at MIT, James Landay, a computer science professor and vice director of the Institute for Human-Centered AI at Stanford University, and Raquel Urtasun, CEO and founder of self-driving tech startup Waabi, which recently put a fleet of trucks into service on Uber Freight's trucking network. The panelists discussed the ethical considerations of AI and the ways in which leaders can ensure its benefits reach every corner of the world. During the discussion, the three panelists highlighted the transformative journey of AI and delved into its profound implications, emphasizing the need for responsible AI deployment.
- Government (0.99)
- Law > Statutes (0.49)
- Transportation > Ground > Road (0.31)
Self-proclaimed AI savior Elon Musk will launch his own artificial intelligence TOMORROW - as he tries to avoid tech destroying humanity
Elon Musk is set to roll out the first model of his AI-powered system, xAI, on Saturday, one day after he proclaimed the tech is the biggest risk to humanity. The billionaire said Friday that he is opening up early access to a select group, but details of who will receive it have not been shared. 'In some important respects, it (xAI's new model) is the best that currently exists,' the Tesla CEO said on Friday. Musk, who has been critical of Big Tech's AI efforts and censorship, said earlier this year that he would launch a maximum truth-seeking AI that tries to understand the nature of the universe to rival Google's Bard and Microsoft's Bing AI. Musk revealed his startup on July 12, 2023, by launching a dedicated X account for the AI company and a sparse website.
- North America > United States > Nevada (0.06)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.06)
The Morning After: Industry leaders say AI presents 'risk of extinction' on par with nuclear war
With the rise of AI language models and tools like ChatGPT and Bard, we've heard warnings from people involved, like Elon Musk, about the risks posed by AI. Now, a group of high-profile industry leaders has issued a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted to the Center for AI Safety, an organization with the mission "to reduce societal-scale risks from artificial intelligence," according to its website. Signatories include OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, the godfathers of modern AI, also put their names to it.
- Government > Military (0.61)
- Leisure & Entertainment > Games > Computer Games (0.32)
AI presents 'risk of extinction' on par with nuclear war, industry leaders say
With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from people involved, like Elon Musk, about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively confirming those fears: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted to the Center for AI Safety, an organization with the mission "to reduce societal-scale risks from artificial intelligence," according to its website. Signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis.
Congress warns AI could reshape 'human history' as ChatGPT inventor Sam Altman testifies
OpenAI CEO Sam Altman is speaking in front of Congress about the dangers of AI after his company's ChatGPT exploded in popularity in the past few months. Lawmakers are grilling the CEO, stressing that ChatGPT and other models could shape 'human history' like the printing press or the atomic bomb. The printing press, according to officials, brought liberty to the American people, while the atomic bomb left behind haunting consequences. Altman told senators that generative AI could be a 'printing press moment,' but he is not blind to its faults, noting policymakers and industry leaders need to work together to 'make it so.' Tuesday's hearing is the first of a series intended to write rules for AI, which lawmakers said should have been done at the birth of social media.
- Asia > Russia (0.15)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- North America > United States > Connecticut (0.05)
- Europe > Ukraine (0.05)
- Government > Military (0.71)
- Government > Regional Government (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.59)
Kagan: Can Qualcomm succeed in AI, Chatbot, ChatGPT, Bard space? - RCR Wireless News
Qualcomm is one of America's leading players in the wireless space. That said, the company is also wrestling with several weak links in its otherwise strong chain. Some of its key wireless sectors, like chipsets and smartphones, have weakened. I believe that is why Qualcomm is trying to refocus its efforts on new segments for growth to keep investors excited. That's why, when we pull the camera back, we have seen Qualcomm searching for new areas of growth in recent years.
- Telecommunications (1.00)
- Semiconductors & Electronics (1.00)
Latest QA Trends That You Should Be Aware Of – QA Valley
The current Quality Assurance market is not static; it's changing rapidly. To adapt to these transformations and stay competitive, businesses need to be aware of and follow the latest industry trends. These trends can help your company meet your business demands and build connected, scalable, intelligent and fast digital solutions for your clients. Those who are able to predict the future of QA will become industry leaders. Thus, we have decided to take a look at the main QA trends that will shape the field over the next few years.
9 IoT Trends To Follow in 2023
Billions of devices are connected to the internet. By the end of 2019, there were around 3.6 billion devices actively connected to the internet and used for daily tasks. The introduction of 5G will open the door for more devices and data traffic. Add to this trend the increased adoption of edge computing, which will make it easier for businesses to process data faster and closer to the points of action. Making the most of data, and even understanding on a basic level how modern infrastructure functions, requires computer assistance through artificial intelligence.
The 9 Best Podcasts for AI, ML, and Data Science Professionals
Artificial intelligence, machine learning, and data science are some of today's most popular tech fields. Becoming a professional in any of them can be your ticket to working on some of the greatest innovations that will power the world for decades to come. Of course, you must keep up with the latest developments in the industry to stay ahead. That's where podcasts come in. The best data science, machine learning, and AI podcasts can teach you a lot about these topics and areas where you can apply them.