How to Prevent AI Dangers With Ethical AI

#artificialintelligence

After widespread protests against racism in the U.S., tech giants Microsoft, Amazon and IBM publicly announced they would no longer allow police departments access to their facial recognition technology. Artificial intelligence (AI) can be prone to errors, particularly in recognizing people of color and those in other underrepresented groups. Any organization developing or using AI solutions needs to be proactive in ensuring that AI dangers don't jeopardize their brand, draw regulatory actions, lead to boycotts or destroy business value. Microsoft President Brad Smith was widely quoted as saying his company wouldn't sell facial-recognition technology to police departments in the U.S., "until we have a national law in place, grounded in human rights, that will govern this technology." So, in the absence of highly rigorous institutional protections against AI dangers, what can organizations do themselves to guard against them?


Regulating AI – is the current legislation capable of dealing with AI? -- FCAI

#artificialintelligence

How does law regulate Artificial Intelligence (AI)? How do we ensure AI applications comply with existing legal rules and principles? Is new regulation needed, and if so, what type? These questions have gained increasing importance as AI deployment has increased across various sectors of our societies. Adopting new technological solutions has raised legislators' concern for the protection of fundamental rights, both nationally in Finland and at the EU level. However, finding these answers is not easy. And the answers we find may be frustrating: varying from the typical "it depends" to the self-evident "it's complicated", followed by the slightly more optimistic "we don't know yet".


Why Your Board Needs a Plan for AI Oversight

#artificialintelligence

We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI -- a discussion that's relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it's time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization's overall mission and risk management. According to McKinsey's 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations "comprehensively identify and prioritize" the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, Fit for the Future: An Urgent Imperative for Board Leadership, 86% of board members "fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years."


The impact of AI on business and society

#artificialintelligence

Artificial intelligence, or AI, has long been the object of excitement and fear. In July, the Financial Times Future Forum think-tank convened a panel of experts to discuss the realities of AI -- what it can and cannot do, and what it may mean for the future. Entitled "The Impact of Artificial Intelligence on Business and Society", the event, hosted by John Thornhill, the innovation editor of the FT, featured Kriti Sharma, founder of AI for Good UK, Michael Wooldridge, professor of computer sciences at Oxford university, and Vivienne Ming, co-founder of Socos Labs. For the purposes of the discussion, AI was defined as "any machine that does things a brain can do". Intelligent machines under that definition still have many limitations: we are a long way from the sophisticated cyborgs depicted in the Terminator films. Such machines are not yet self-aware and they cannot understand context, especially in language. Operationally, too, they are limited by the historical data from which they learn, and restricted to functioning within set parameters. Rose Luckin, professor at University College London Knowledge Lab and author of Machine Learning and Human Intelligence, points out that AlphaGo, the computer that beat a professional (human) player of Go, the board game, cannot diagnose cancer or drive a car.


Impact of Artificial Intelligence in Cybersecurity

#artificialintelligence

Even though security solutions are becoming more modern and robust, cyber threats are ever-evolving and at an all-time high. The main reason is that conventional methods of detecting malware are falling apart. Cybercriminals are regularly coming up with smarter ways to bypass security programs and infect networks and systems with different kinds of malware. The trouble is that, currently, most antimalware and antivirus programs use signature-based detection to catch threats, which is ineffective against new threats. This is where Artificial Intelligence can come to the rescue.
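The weakness of signature-based detection described above can be illustrated with a minimal sketch (a hypothetical toy, with made-up byte strings standing in for file contents): an exact-match signature catches a known sample but misses even a trivially modified variant.

```python
import hashlib

# Hypothetical signature database: hashes of known malware samples.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious-payload-v1").hexdigest(),
}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample's hash exactly matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_SIGNATURES

print(signature_scan(b"malicious-payload-v1"))  # True: known sample detected
print(signature_scan(b"malicious-payload-v2"))  # False: slight variant evades
```

Because any byte-level change produces a completely different hash, every new malware variant starts out invisible to this scheme; AI-based approaches instead try to flag suspicious behavior or statistical anomalies rather than exact fingerprints.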


A Practical Guide to Building Ethical AI

#artificialintelligence

Companies are leveraging data and artificial intelligence to create scalable solutions -- but they're also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than women on their Apple cards. Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.


'Machines set loose to slaughter': the dangerous rise of military AI

#artificialintelligence

Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.


Disassembly Required -- Real Life

#artificialintelligence

HitchBot, a friendly-looking talking robot with a bucket for a body and pool-noodle limbs, first arrived on American soil back in 2015. This "hitchhiking" robot was an experiment by a pair of Canadian researchers who wanted to investigate people's trust in, and attitude towards, technology. The researchers wanted to see "whether a robot could hitchhike across the country, relying only on the goodwill and help of strangers." With rudimentary computer vision and a limited vocabulary but no independent means of locomotion, HitchBot was fully dependent on the participation of willing passers-by to get from place to place. Fresh off its successful journey across Canada, where it also picked up a fervent social media following, HitchBot was dropped off in Massachusetts and struck out towards California. But HitchBot never made it to the Golden State.


Artificial Intelligence (AI) ethics: 5 questions CIOs should ask

#artificialintelligence

You may not realize it, but artificial intelligence (AI) is already enhancing our lives in a multitude of ways. AI systems already staff our call centers, drive our cars, and take orders through kiosks at local fast food restaurants. In the days ahead, AI and machine learning will become an even more prominent fixture, disrupting industries and removing tedium from our everyday lives. As we hand over larger chunks of our lives to the machines, we need to lift the hood to see what kind of ethics are driving them, and who is defining the rules of the road. Many CIOs have begun experimenting with AI in areas that may not be very visible to end users, such as automating warehouses.