Self-driving cars are one of the high-risk artificial intelligence applications the European Union wants to regulate. The European Commission today unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China. The commission will draft new laws, including a ban on "black box" AI systems that humans can't interpret, to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference announcing the plan that the goal is to promote "trust, not fear." The plan also includes measures to update the European Union's 2018 AI strategy and pump billions into R&D over the next decade.
The U.S. Energy Department earlier this month appointed former 3M Co. artificial-intelligence leader Cheryl Ingstad as the first director of its Artificial Intelligence and Technology Office, where she will oversee the DOE's AI activities. The mission of the AITO, which was formed in September 2019, is to coordinate the department's artificial-intelligence activities, a mission that includes scaling AI projects across the DOE, sharing best practices, and reducing duplicate projects. The office also is charged with facilitating partnerships...
The Defense Department has officially adopted a set of principles to ensure ethical artificial intelligence adoption, but much work is needed on the implementation front, senior DOD tech officials told reporters Feb. 24. The five principles [see sidebar], which are based on the recommendations of the Defense Innovation Board's 15-month study on the matter, represent a first step, setting out generalized intentions around AI use and adoption: being responsible, equitable, traceable, reliable, and governable. DOD released the principles during a news briefing Feb. 24. Those AI ethical guidelines will likely be woven into a little bit of everything, like cyber, from data collection to testing, DOD CIO Dana Deasy told reporters. "We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used and you can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending," Deasy said.
The Centers for Disease Control and Prevention (CDC) is using data from platforms like Reddit and Twitter to power artificial intelligence that can forecast suicide rates. The agency is doing this because its current suicide statistics are delayed by up to two years, which means that officials are forming policy and allocating mental health resources throughout the country without the most up-to-date numbers. The CDC's suicide rate statistics are calculated based on cause-of-death reports from throughout the 50 states, which are compiled into a national database. That information is the most accurate reporting we have, but it can take a long time to produce. "If we want to do any kind of policy change, intervention, budget allocation, we need to know the real picture of what is going on in the world in terms of people's mental health experiences," Munmun de Choudhury, a professor at Georgia Tech's School of Interactive Computing who is working with the CDC, told Recode.
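The approach described above is often called "nowcasting": using a timely proxy signal to estimate a statistic whose official figures lag by months or years. Below is a minimal, hypothetical sketch of the idea, fitting a simple linear model that maps a social-media signal to official rates. All numbers and names here are illustrative assumptions, not CDC data or methodology.

```python
# Hypothetical nowcasting sketch: estimate an official rate from a timely
# proxy signal (e.g. weekly counts of posts matching mental-health keywords)
# when the official figure for recent weeks is not yet available.
# All data below is made up for illustration.
import numpy as np

# Illustrative history: weekly keyword-post counts and the official rates
# eventually published for those same weeks.
post_counts = np.array([120.0, 150.0, 90.0, 200.0, 170.0, 130.0])
official_rate = np.array([4.1, 4.6, 3.8, 5.4, 4.9, 4.3])

# Ordinary least squares fit: rate ~ a * posts + b
a, b = np.polyfit(post_counts, official_rate, deg=1)

def nowcast(posts: float) -> float:
    """Estimate the rate for a week where only the proxy signal exists."""
    return a * posts + b

# A recent week with 180 matching posts but no official figure yet.
estimate = nowcast(180.0)
print(round(estimate, 2))
```

A real system would of course use far richer features (language models over post text, regional breakdowns) and validate against held-out official data, but the core idea of regressing a lagging statistic on a timely proxy is the same.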
More than 1,000 students are vying for places at the world's first dedicated artificial intelligence university in Abu Dhabi. The Mohamed bin Zayed University of Artificial Intelligence will swing open its doors in August, with demand high from those eager to be part of the inaugural class of 2020. The graduate-level institute revealed that the bumper crop of applicants is currently being put through a stringent vetting process ahead of the landmark opening term. Masters and PhD courses will be held at the forward-thinking seat of learning, which has cast its net far and wide across the globe in search of top talent. During the university's first advisory board meeting, Dr Sultan Al Jaber, Minister of State, said the first wave of students would be at the forefront of a new era of innovation in the country.
Facial recognition software provider Clearview AI has revealed that its entire client list was stolen by someone who 'gained unauthorized access' to company documents and data. According to a notice sent to its customers, Clearview AI said that in addition to its client list, the intruder had gained access to the number of user accounts associated with each client, as well as the number of searches conducted through those accounts. The company didn't specify how the security breach had occurred or who might have been responsible, and it claimed its servers and internal network hadn't been compromised. 'Unfortunately, data breaches are part of life in the 21st century,' Clearview attorney Tor Ekeland told The Daily Beast, which broke the story. 'Our servers were never accessed.'
The U.S. Department of Defense officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia and the American public, a rigorous process of feedback and analysis with multiple venues for public input and comment. "The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Secretary Esper. "AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military's strong history of applying rigorous testing and fielding standards for technology innovations."
Sci-fi movies have left us with the impression that bringing robots into our lives is a very bad idea. From The Terminator to The Matrix, Hollywood has repeatedly depicted robots taking control of humanity. Even R.U.R., the 1920s Karel Capek play that introduced the term "robot," struck this note. Despite the cinematic warnings, robots have moved from fiction into an important part of the modern world's arsenal, and the developed world is now debating whether to develop killer robots and machines in order to spare human lives.
Machine learning is the latest technology to make waves in the field of Information Security, and for good reason. The support of complex algorithms that 'learn' and grow is invaluable to human analysts, allowing them to focus on larger tactical fights and strengthen security systems against attack. In both routine and structural changes to Information Security, machine learning plays an increasingly important role and will continue to do so in the coming years. What is Information Security (InfoSec)? InfoSec refers to the systems, tools and processes that are designed and deployed to shield sensitive and confidential data from being compromised or tampered with.
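One common way such 'learning' systems assist InfoSec analysts is anomaly detection: learning a baseline from normal activity and surfacing only the outliers for human review. The sketch below is a deliberately minimal statistical version of that idea; the data, threshold, and scenario are illustrative assumptions, not a real product's method.

```python
# Minimal anomaly-detection sketch: learn a baseline (mean and standard
# deviation) from normal activity, then flag values far above it so a
# human analyst only reviews the outliers. Illustrative data only.
from statistics import mean, stdev

def fit_baseline(history):
    """Learn the normal range from historical counts."""
    return mean(history), stdev(history)

def is_anomalous(value, mu, sigma, k=3.0):
    """Flag values more than k standard deviations above the baseline."""
    return value > mu + k * sigma

# Example: failed-login counts per minute observed during normal operation.
normal_traffic = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
mu, sigma = fit_baseline(normal_traffic)

print(is_anomalous(3, mu, sigma))   # a typical minute -> not flagged
print(is_anomalous(60, mu, sigma))  # a burst suggesting brute force -> flagged
```

Production systems replace this three-sigma rule with richer models, but the division of labor is the same: the algorithm filters the flood of events, and the analyst investigates what it flags.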