Biden hands China big win with military deal, experts say: 'Incredibly poor decision'
House Armed Services Committee holds a hearing on the Department of Defense using artificial intelligence. President Biden is set to strike a deal with China that would limit the use of artificial intelligence in nuclear weapons. Biden is to meet with Chinese President Xi Jinping on Wednesday at the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco, where the two leaders are also expected to sign an agreement limiting AI's use in military applications, according to a report from Business Insider. According to the report, Biden and Xi will agree to limit AI use in the systems that control and deploy nuclear weapons, as well as the technology's use in autonomous weapon systems such as drones. President Biden shakes hands with Chinese President Xi Jinping as they meet on the sidelines of the G20 leaders summit in Bali, Indonesia, on Nov. 14, 2022.
Using today's technologies to create AI safeguards for tomorrow
In a proactive move to "eliminate a future conflict," Elon Musk recently stepped down as chairman of OpenAI, a nonprofit research company he cofounded two years ago with the aim of building safe artificial intelligence (AI) with wide benefits for all. In 2014, Musk suggested AI could be "more dangerous than nuclear weapons," a statement he reiterated at a recent SXSW event in Austin, TX. Bill Gates also went on record several years ago about his concerns regarding machine superintelligence. Those concerns echo a widely signed open letter on AI safety: "It is important to research how to reap [AI] benefits while avoiding potential pitfalls." The open letter was accompanied by a research priorities proposal highlighting work that can be done to make AI "robust and beneficial." Perhaps the most pressing question today is whether we can use current technologies -- such as historical and preventative tracking -- to build AI safeguards that not only figure out why an AI algorithm made a poor decision but also prevent other AI algorithms from making the same poor decision.
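To make "historical and preventative tracking" more concrete, one could imagine a shared registry that logs decisions judged to be poor so that other systems can check it before acting. The sketch below is purely illustrative: the names `DecisionRegistry`, `flag_poor_decision`, and `is_known_poor` are invented for this example and do not refer to any real safeguard framework.

```python
# Hypothetical sketch of historical and preventative tracking: a registry
# that records decisions flagged as poor, which any model can consult to
# avoid repeating a decision already known to be bad.

class DecisionRegistry:
    def __init__(self):
        # Maps a (situation, decision) pair to the reason it was flagged.
        self._poor = {}

    def flag_poor_decision(self, situation, decision, reason):
        """Historical tracking: record why a past decision was judged poor."""
        self._poor[(situation, decision)] = reason

    def is_known_poor(self, situation, decision):
        """Preventative tracking: let other models check before acting."""
        return (situation, decision) in self._poor


registry = DecisionRegistry()
# Model A made a poor call; an auditor records it with its diagnosis.
registry.flag_poor_decision("sensor_glitch", "escalate", "acted on corrupt input")
# Model B, facing the same situation, can now preclude the same mistake.
print(registry.is_known_poor("sensor_glitch", "escalate"))  # True
print(registry.is_known_poor("sensor_glitch", "wait"))      # False
```

The design assumption here is that "the same poor decision" can be identified by matching on the situation and the action taken; real systems would need a far richer notion of similarity.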
Why Machine Learning Beginners Shouldn't Avoid the Math
In this post I consider three learning approaches and argue that it can be a bad idea to avoid the mathematics and theory when starting out with machine learning. There are three approaches to starting out in machine learning that I have seen practiced. One is a bottom-up approach, in which the student starts with the mathematics and theory and then puts it into practice, either in a high-level language -- such as Matlab, Python, R or Octave -- or by coding from scratch in a 3GL like Java, C# or C. The second is the top-down approach, in which machine learning tools and/or libraries are used to shelter the student from the coding, mathematics and theory. The student is instructed to worry about how it all works later and to instead practice working with datasets.
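To illustrate what the bottom-up route looks like in practice, here is a minimal sketch (not taken from the post) that implements simple linear regression directly from the least-squares formulas, rather than calling a library that hides the math:

```python
# Bottom-up approach, sketched: derive the fit from the closed-form
# least-squares formulas (slope = cov(x, y) / var(x)) and code them by hand.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized;
    # the common 1/n factor cancels in the ratio).
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Points lying exactly on y = 2x + 1:
slope, intercept = fit_line([0, 1, 2], [1, 3, 5])
print(slope, intercept)  # 2.0 1.0
```

A student following the top-down approach would instead call a library routine that returns the same coefficients in one line; the bottom-up student understands why those formulas are the ones being evaluated.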