Here is a quick refresher on artificial intelligence. AI refers to the simulation of human intelligence by machines; essentially, it is the ability of a device or program to think and learn. Let's look at how different domains use AI and at the current state of the field. Have you ever wondered what would happen if artificially intelligent machines tried to create music and art?
Keith E. Sonderling is a commissioner for the U.S. Equal Employment Opportunity Commission. Views are the author's own. It's no secret that online advertising is big business. In 2019, digital ad spending in the United States surpassed traditional ad spending for the first time, and by 2023 it is expected to all but eclipse it. It's easy to understand why. Digital marketing is now the most effective way for advertisers to reach an enormous segment of the population -- and social media platforms have capitalized on this to the tune of billions of dollars.
Lemonade Insurance sparked outrage this week when it took to Twitter to boast about how its AI system was able to boost profits by automatically denying claims based on analysis of videos submitted by customers. In a lengthy thread on Twitter Monday afternoon, the company lauded itself for how its AI was able to detect fraud. "When a user files a claim, they record a video on their phone and explain what happened. It can pick up non-verbal cues that traditional insurers can't, since they don't use a digital claims process," the company wrote. "This ultimately helps us lower our loss ratios (aka how much we pay out in claims vs. how much we take in) and our overall operating costs. In Q1 2017, our loss ratio was 368% (friggin' terrible), and in Q1 2021 it stood at 71%," the company added in the now-deleted thread.
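For readers unfamiliar with the metric, the loss ratio the thread cites is simple arithmetic: claims paid out divided by premiums taken in. A minimal sketch (the function name is ours; the only figures from the source are the quoted percentages):

```python
def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Loss ratio: claims paid out divided by premiums taken in, as a percentage."""
    return 100 * claims_paid / premiums_earned

# A 368% ratio means paying out $3.68 for every $1.00 collected;
# 71% means paying out $0.71 per $1.00 collected.
print(round(loss_ratio(3.68, 1.00)))  # 368
print(round(loss_ratio(0.71, 1.00)))  # 71
```

Anything above 100% means the insurer pays out more in claims than it collects in premiums, which is why the 2017 figure was "friggin' terrible."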
"I have nothing to hide" was once the standard response to surveillance programs utilizing cameras, border checks, and casual questioning by law enforcement. Privacy used to be considered a concept generally respected in many countries, with a few changes to rules and regulations here and there often made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- compliments of the National Security Agency (NSA) and Edward Snowden's whistleblowing -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common factor of our daily lives, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to our mobile devices and the Internet of Things (IoT) products installed in our homes has the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept and can encompass such a wide range of information that it is worth briefly breaking down different collections before examining how each area is relevant to your privacy and security.
Personally identifiable information, known as PII, can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans in your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.
"The big tech is banking heavily on AI, Cloud and 5G technologies to retain customers and drive growth" A global emergency can smother your business, government lawsuits can break your company, and competitors with trillion-dollar market values can wipe your organisation off the map. But what happens when all three strike in the same year? The pandemic brought the world to a standstill. The internet giants, however, came out of it unscathed. Apple, Amazon, Google and Facebook, popularly known as the big four, have not only survived a combination of calamities but registered profits that left Wall Street analysts dumbfounded.
An institution, be it a body of government, a commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent us. We argue for the existence of a new, high-fidelity type of person model, which we call a digital voodoo doll. We conceptualize it and compare its features with those of existing models of persons. Digital voodoo dolls are distinguished by existing completely beyond the influence and control of the person they represent. We discuss the ethical issues that such a lack of accountability creates and how these concerns can be mitigated.
Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior -- in other words, learning from their personalized data and altering their choices in some subtle way. Since the Cambridge Analytica scandal broke in 2018, for example, it has been widely known that the flood of targeted advertising and highly personalized content on Facebook may not only nudge users into buying more products but also coax and manipulate them into voting for particular political parties. University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein popularized the term "nudge" in 2008, but thanks to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers' behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals' decisions and behaviors at large scale. These algorithms can be adjusted in real time, making the approach even more effective.
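To make the mechanism concrete, algorithmic nudging of this kind is often framed as a multi-armed bandit that learns from responses and updates its policy in real time. The sketch below is a generic epsilon-greedy bandit; the nudge options and reward signal are entirely hypothetical and not any particular company's system.

```python
import random

class NudgeBandit:
    """Epsilon-greedy bandit: favors the nudge with the best observed
    response rate, while occasionally exploring the alternatives."""

    def __init__(self, nudges, epsilon=0.1):
        self.nudges = nudges
        self.epsilon = epsilon
        self.counts = {n: 0 for n in nudges}    # times each nudge was shown
        self.values = {n: 0.0 for n in nudges}  # running mean response

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.nudges)          # explore
        return max(self.nudges, key=self.values.get)   # exploit

    def update(self, nudge, reward):
        # Incremental mean update: the policy shifts after every interaction.
        self.counts[nudge] += 1
        self.values[nudge] += (reward - self.values[nudge]) / self.counts[nudge]

bandit = NudgeBandit(["badge", "push_notification", "leaderboard"])
shown = bandit.choose()
bandit.update(shown, reward=1.0)  # e.g. the user responded to the nudge
```

The point of the sketch is the feedback loop: every observed response immediately changes which nudge is shown next, which is what makes the algorithmic version more adaptive than a fixed, one-size-fits-all nudge.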
You're probably reading this on a browser built by Apple or Google. If you're on a smartphone, it's almost certain those two companies built the operating system. You probably arrived from a link posted on Apple News, Google News or a social media site like Facebook. And when this page loaded, it, like many others on the Internet, connected to one of Amazon's ubiquitous data centers. Amazon, Apple, Facebook and Google -- known as the Big 4 -- now dominate many facets of our lives. But they didn't get there alone. Over decades, they acquired hundreds of companies, propelling themselves to become some of the most powerful tech behemoths in the world.
Abuse on the Internet is an important societal problem of our time. Millions of Internet users face harassment, racism, personal attacks, and other types of abuse across various platforms. The psychological effects of abuse on individuals can be profound and lasting. Consequently, over the past few years, there has been a substantial research effort towards automated abusive language detection in the field of NLP. In this position paper, we discuss the role that modeling of users and online communities plays in abuse detection. Specifically, we review and analyze state-of-the-art methods that leverage user or community information to enhance the understanding and detection of abusive language. We then explore the ethical challenges of incorporating user and community information, laying out considerations to guide future research. Finally, we address the topic of explainability in abusive language detection, proposing properties that an explainable method should aim to exhibit. We describe how user and community information can facilitate the realization of these properties and discuss how explainability can be effectively operationalized in light of them.
This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation. The end of liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word "algorithm" alone was used more than 50 times. Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.