At present, more African elephants are dying than being born. Over the last century, the world's elephant population has declined by 97% at the hands of trophy hunters, ruthless ivory mercenaries, and even terrorist groups. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the death of up to 35,000 elephants a year in Africa. It's easy to point a finger at China as the biggest market for poached ivory in the world, yet only five years ago more than a ton of confiscated ivory was crushed in New York's Times Square by the Wildlife Conservation Society.
Financial regulators around the world are cracking down on banks. With Anti-Money Laundering (AML) and Know-Your-Customer (KYC) procedures being put under the microscope, huge fines are being levied against institutions that are found to be in breach. In fact, a recent study found that over the past ten years, banks across the globe have been hit with a total of US$26 billion in monetary penalties for AML and sanctions violations. As banks and financial institutions embark on digital transformation initiatives to streamline and simplify the customer onboarding process and reduce the risk associated with fraud, many are eyeing the potential of emerging technologies. These enable financial institutions to identify illicit client relationships, beneficiaries, and links to criminal or terrorist activity during the onboarding phase.
If you're laundering money or financing terrorism, what keeps you up at night? Beyond the day-to-day intrigues of a life of crime, you have a money trail to worry about. Sure, you fear the sophisticated law enforcement and intelligence agencies with the power to track you, shut you down, and put you behind bars. But what about the thousands of watchful eyes observing your money as it flows through banks, casinos, real estate, and other covert financial conduits? Some of those eyes belong to trained bank employees: BSA and AML analysts, financial crimes investigators, compliance officers, and many others.
Bias in machine learning algorithms has come under scrutiny in countless applications, underscoring how much "silent AI" has permeated everyday life and influenced far-reaching, sea-change decisions in business and society. The issue of machine learning and bias has intensified with COVID-19 lockdowns and hospital bed count predictions, police and prison reforms, credit ratings, upcoming elections, and even the use of language. Facebook CEO Mark Zuckerberg, AI Trends reported, "has promised that machine learning and AI will enable the company to combat the spread of hate speech, terrorist propaganda, and political misinformation across its platforms." First Amendment watchers are taking notice. "Regulating AI is challenging, but industries will need to create standardization around development and certifications, as well as a set of professional standards for ethical AI usage," noted Gartner.
Traditional colonial power seeks unilateral power and domination over colonised people. It asserts control of the social, economic, and political spheres by reordering and reinventing the social order in a manner that benefits it. In the age of algorithms, this control and domination occur not through brute physical force but through invisible and nuanced mechanisms such as control of digital ecosystems and infrastructure. Common to both traditional and algorithmic colonialism is the desire to dominate, monitor, and influence social, political, and cultural discourse through control of core communication and infrastructure mediums. While traditional colonialism is often spearheaded by political and government forces, digital colonialism is driven by corporate tech monopolies, both of which are in search of wealth accumulation. The line between these forces is fuzzy as they intermesh and depend on one another. Political, economic, and ideological domination in the age of AI takes the form of "technological innovation", "state-of-the-art algorithms", and "AI solutions" to social problems. Algorithmic colonialism, driven by profit maximisation at any cost, assumes that the human soul, behaviour, and action are raw material free for the taking.
The introduction of AI systems into militaries, and their acquisition by countries, especially great powers, may have grave implications in the long run. If the AI race gains momentum, it could create a perceived imbalance of power: one state's acquisition of AI systems may be seen as a threat by another, given the anarchic nature of the international arena. Even as we try to be mindful of the unintended consequences of advances in military technology and autonomous weapons, we must recognize the militaristic appeal these weapons hold and how poorly we can envision what increasingly self-sufficient weapons may become. In 2016, a professional match in the strategy game Go took place between South Korean master Lee Se-dol and Google's artificial intelligence program, which defeated him decisively.
Artificial intelligence could facilitate crime and even terrorism in at least 20 different ways over the next 15 years. According to a study by University College London (UCL), in England, the technology can pose a serious threat when used, for example, in fraud and smear campaigns. Experts predict that AI will drive the rise and popularization of deepfakes increasingly indistinguishable from reality. Beyond being difficult to detect and prevent, this type of technique could lead to a generalized discrediting of audio and video evidence as a means of understanding an event. Fake videos and audio could, for example, accelerate smear campaigns against public figures, such as political opponents in the midst of elections.
It's been dubbed the 'Fourth Industrial Revolution', but it seems that the impact of artificial intelligence may need a bigger conversation than many of us realise. Speaking at a briefing in London ahead of the British Science Festival, Professor Jim Al-Khalili, the incoming president of the British Science Association, warned that AI will affect issues such as climate change, and even terrorism. He said: "Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty. "But today I am certain the most important conversation we should be having is about the future of AI.
As social media increasingly becomes people's primary source for news online, there is a rising threat from the spread of malign and false information. In the absence of human editors in news feeds, and with the growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyberbullying, terrorist propaganda, and political reputation-boosting or smearing campaigns.
In this article, we will look at toxic speech detection and the problem of text moderation, and examine the different challenges one might encounter when trying to automate the process. We look at several NLP and deep learning approaches to the problem and finally implement a toxic speech classifier using BERT embeddings. As of June 2019, there were over 4.4 billion internet users. According to the latest Domo Data Never Sleeps report, Twitter users send 511,200 tweets per minute. Meanwhile, TikTok gets banned in Indonesia, Discord sees an increasing number of neo-Nazi posts, tech and film celebrities' accounts get hacked so attackers can post racist slurs, and hate speech volumes rise on Facebook in India over the controversial Citizenship Amendment Act (CAA). Social media continues to be used by many to incite violence, spread hate, and target minorities based on religion, sex, race, and disability.
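The classifier described above can be sketched in two stages: encode each text into a fixed-size embedding, then train a binary classification head on those vectors. Below is a minimal, self-contained sketch of the second stage only: the synthetic Gaussian clusters stand in for real BERT sentence embeddings (which in practice would come from a pretrained encoder such as one loaded via the Hugging Face `transformers` library), and the hand-rolled logistic regression is one simple choice of classification head, not the article's exact implementation.

```python
import numpy as np

EMB_DIM = 768  # BERT-base produces 768-dimensional sentence embeddings

rng = np.random.default_rng(0)

# Stand-in data: in a real pipeline these vectors would be BERT embeddings
# of labelled comments. Here, toxic and non-toxic examples are drawn from
# two slightly shifted Gaussian clusters so the task is learnable.
X_toxic = rng.standard_normal((100, EMB_DIM)) + 0.5
X_clean = rng.standard_normal((100, EMB_DIM)) - 0.5
X = np.vstack([X_toxic, X_clean])
y = np.concatenate([np.ones(100), np.zeros(100)])  # 1 = toxic, 0 = clean


def train_logreg(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression head with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(toxic)
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                # gradient w.r.t. bias
    return w, b


def predict(X, w, b, threshold=0.5):
    """Flag an embedding as toxic when P(toxic) exceeds the threshold."""
    probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return (probs > threshold).astype(int)


w, b = train_logreg(X, y)
accuracy = (predict(X, w, b) == y).mean()
```

In a real system the moderation threshold would be tuned on a held-out set to trade off false positives (over-removal of benign posts) against false negatives (missed toxic content), which is one of the moderation challenges the article discusses.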