The significance of artificial intelligence and machine learning (AI/ML) has grown considerably in recent years, to the point where these technologies help businesses gain an advantage over their competitors. With the ever-increasing volumes of data generated each day, it has become essential to process that data in real time. This is where AI/ML comes into the picture: the technology can process and analyze vast volumes of data within minutes. The relevance of IoT devices, too, has been on the rise.
To help its service technicians repair and maintain its models more efficiently, Mercedes-Benz USA is outfitting all of its authorized American dealerships with HoloLens 2 headsets. The devices are equipped with Microsoft Dynamics 365 Remote Assist, a mixed reality app that lets users collaborate during hands-free video calls from their own computers. Organizations have long known the importance of business resiliency, but becoming resilient requires time and preparation, and the pandemic has forced many organizations to evolve at a pace few could have imagined. Recovering and thriving within this new context presents new challenges. That is why we are partnering with customers to support faster adoption of digital capabilities.
Modern-day enterprise security is like guarding a fortress that is being attacked on all fronts, from digital infrastructure to applications to network endpoints. That complexity is why AI technologies such as deep learning and machine learning have emerged as game-changing defensive weapons in the enterprise's arsenal over the past three years. No other technology can keep pace: AI can rapidly analyze billions of data points and glean patterns that help a company act intelligently and instantaneously to neutralize many potential threats. Beginning about five years ago, investors started pumping hundreds of millions of dollars into a wave of new security startups that leverage AI, including CrowdStrike, Darktrace, Vectra AI, and Vade Secure, among others.
In September 2019, four researchers wrote to the publisher Wiley to "respectfully ask" that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity. China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern province of Xinjiang -- which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces. As a result, many researchers found it disturbing that academics had tried to build such algorithms -- and that a US journal had published a research paper on the topic. And the 2018 study wasn't the only one: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) had also published peer-reviewed papers that describe using facial recognition to identify Uyghurs and members of other Chinese minority groups. The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research.
Travelers who wander the banana pancake trail through Southeast Asia will all get roughly the same experience. They'll eat crummy food on one of fifty boats floating around Ha Long Bay, then head up to the highlands of Sa Pa for a faux cultural experience with hill tribes that grow dreadful cannabis. After that, it's on to Laos to float the river in Vang Vieng while smashed on opium tea. Eventually, you'll see someone wearing a t-shirt with the classic slogan – "same same, but different." The origins of this phrase surround the Southeast Asian vendors who often respond to queries about the authenticity of fake goods they're selling with "same same, but different." It's a phrase that appropriately describes how the technology world loves to spin things as fresh and new when they've hardly changed at all.
Artificial intelligence (AI) applications have attracted considerable ethical attention, for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers three such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management.
Apple may be stealthily developing its own search engine as Google faces a lawsuit from U.S. antitrust authorities over the search giant's agreements with companies to be the default search tool. In iOS 14, the newest operating system update for the iPhone, Apple has started showing its own search results and direct links to websites when users search from their home screen. iOS 14 no longer relies on Google for many of the search functions it previously did. The search window that appears when users swipe right on an iPhone now compiles Apple-generated search suggestions rather than Google results. Earlier this week, the U.S. Department of Justice said in a landmark lawsuit that Google is monopolizing the search space by entering into multibillion-dollar deals with mobile companies like Apple and Motorola, and network carriers like AT&T and Verizon, to be the default search engine on devices.
Google has had an eventful couple of weeks, announcing enhancements to its search and map capabilities at its virtual "Search On" event on Oct. 15, and on Oct. 20 being accused by the US Justice Department of engaging in anti-competitive practices to preserve its search engine business. At the Search On event, Google detailed how it has tapped AI and machine learning techniques to improve Google Maps as well as Search. In an expansion of its search "busyness metrics," users will be able to see how busy locations are without identifying the specific beach, grocery store, pharmacy or other location. COVID-19 safety information will also be added to business profiles across Search and Maps, indicating whether a business is using safety precautions such as temperature checks or plexiglass shields, according to an account in VentureBeat. An improvement to the algorithm beneath search's "Did you mean?" feature will enable more accurate and precise spelling suggestions.
OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry. This week, Kantrowitz sits down with Meredith Whittaker, an A.I. researcher who helped lead Google's employee walkout in 2018. This interview, which took place at World Summit A.I., has been edited for length and clarity. To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast. When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, "You should speak to the people who were critical of the social web long before the film." One name stood out: Meredith Whittaker. An A.I. researcher and former Big Tech employee, Whittaker helped lead Google's walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film but also about the future of workplace activism inside tech companies, at a moment when some are questioning whether it belongs there at all. Alex Kantrowitz: It seems like your perspective on The Social Dilemma is a little bit different from Tristan's.