Results


Apple Electronics: Inside the Beatles' eccentric technology subsidiary

Daily Mail - Science & tech

Say the word Apple today and we think of Steve Jobs' multi-billion-dollar technology company that spawned the iPhone and the Mac computer. But a decade before the California-based firm was even founded, Apple Electronics, a subsidiary of the Beatles' record label Apple, was working on several pioneering inventions – some of which were precursors of commonly available products today. Apple Electronics was led by Alexis Mardas, a young electronics engineer and inventor originally from Athens in Greece, known to the Beatles as Magic Alex. He died on this day in 2017, aged 74, and was one of the most colourful and mysterious characters in the Beatles' story. Dressed in a white lab coat in his London workshop, Mardas created prototypes of inventions that were set to be marketed and sold. These included the 'composing typewriter' – powered by an early example of sound recognition – and a phone with advanced memory capacity.


Google Workers Launch Union To Press Grievances With Executives

NPR Technology

More than 200 engineers and other workers have formed a union at Google, a breakthrough in labor organizing in Silicon Valley where workers have clashed with executives over workplace culture, diversity and ethics. Across half a dozen Google offices in the U.S. and Canada, 226 workers signed cards to form the Alphabet Workers Union, the group said on Monday. They are supported by the Communications Workers of America, which represents workers in telecommunications and media. The new union won't have collective bargaining rights and represents only a small fraction of Google's workforce. Google, which is owned by Alphabet Inc., has faced employee outcry over issues including sexual harassment, its work with the Pentagon and the company's treatment of its massive contract workforce.


The privacy wins worth celebrating in an otherwise dreary 2020

Mashable

Let's talk about the good things that happened this year. Yes, 2020 has been a relentless nightmare that's unspooled at rapidly shifting speeds -- and it's showing no signs of magically abating as the clock strikes 12 this New Year's Eve. But you, who by some combination of luck or fate are still thinking and breathing, know this already. What you may be less aware of, however, is that despite the undeniable pain and tragedy 2020 has wrought, there are developments worth celebrating. While each passing year seemingly brings with it news of further digital indignities thrust upon your life, 2020 witnessed genuine progress when it comes to protecting your privacy.


Digital Instruments as Invention Machines

Communications of the ACM

The history of invention is a history of knowledge spillovers. There is persistent evidence of knowledge flowing from one firm, industry, sector, or region to another, either by accident or by design, enabling other inventions to be developed [1,6,9,13]. For example, Thomas Edison's invention of the "electrical indicator" (US patent 307,031: 1884) spurred the development by John Fleming and Lee De Forest in the early 20th century of early vacuum tubes, which eventually enabled not just long-distance telecommunication but also early computers (for example, Guarnieri [10]). Edison, in turn, learned from his contemporaries, including Frederick Guthrie [11]. It appears that little of this mutual learning and knowledge exchange was paid for; it can thus be called a "spillover," that is, an unintended flow of valuable knowledge, an example of a positive externality. Information technologies have been a major source of knowledge spillovers [a]. Information is a basic ingredient of invention, and technologies that facilitate the manipulation and communication of information should also facilitate invention.


We saw the future in 2020 and the future sucks

Mashable

Flying cars are starting to look like a crock of shit. I contend we're living in the future, and -- spoiler ahead -- flying cars aren't the future we got. Listen, I hate this gut feeling as much as you probably do, but I can't quite shake it: 2020 looks a whole hell of a lot like the future. We lived through screens -- at least, you did if you were fortunate and caring -- and limited our human interaction to a bare minimum. Hours upon hours poured into television or immersive video game worlds. It all reminds me of a piece my friend Mike Murphy wrote for Quartz in 2016 titled, "The future is a place where we won't have to talk to or hear from anyone we don't want to."


From whistleblower laws to unions: How Google's AI ethics meltdown could shape policy

#artificialintelligence

It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and for the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.


Open Problems in Cooperative AI

arXiv.org Artificial Intelligence

Problems of cooperation--in which agents seek ways to jointly improve their welfare--are ubiquitous and important. They can be found at scales ranging from our daily routines--such as driving on highways, scheduling meetings, and working collaboratively--to our global challenges--such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.
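
To make this class of problems concrete (the toy example below is ours, not the paper's), consider the iterated prisoner's dilemma, the classic game-theoretic model of cooperation: a reciprocating strategy such as tit-for-tat can sustain mutual cooperation that one-shot self-interest would unravel. A minimal Python sketch, assuming a standard payoff matrix:

    # Minimal iterated prisoner's dilemma: a toy instance of a cooperation problem.
    # Payoffs (row player, column player) follow the standard T > R > P > S ordering;
    # the specific numbers are illustrative assumptions.
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation (R)
        ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),  # mutual defection (P)
    }

    def tit_for_tat(opponent_history):
        """Cooperate first, then mirror the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []  # each strategy sees the *opponent's* history
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)
            move_b = strategy_b(hist_a)
            pay_a, pay_b = PAYOFFS[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation sustained
    print(play(tit_for_tat, always_defect))  # (9, 14): defection pays once, then stalls

Against itself, tit-for-tat earns the mutual-cooperation payoff every round; against an unconditional defector, it loses once and then defends itself. Cooperative AI, as framed above, asks how to design agents and institutions so that outcomes like the first become the norm.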


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Ben Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the kinds of future cities that are worth living in and worth designing. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book called Human Compatible, and the Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges -- security, robustness, interpretability, and ethics -- to the successful deployment of AI and ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and on how research can fill the current gaps and lead to better solutions.
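
Interpretability, one of the challenges surveyed here, can be probed with simple model-agnostic tools. The sketch below uses permutation importance from scikit-learn: shuffle one feature at a time and measure how much held-out accuracy drops. The synthetic data and model choice are our illustrative assumptions, not the paper's:

    # Permutation feature importance: a simple, model-agnostic interpretability check.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a smart-city dataset (e.g., sensor readings -> incident flag).
    X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in held-out accuracy.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: mean accuracy drop = {imp:.3f}")

Features whose shuffling barely moves the score contribute little to the model's decisions; large drops point to the inputs a deployed system actually relies on, which is exactly the kind of question city stakeholders need answered.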


The Emerging Threats of Deepfake Attacks and Countermeasures

arXiv.org Artificial Intelligence

Deepfake technology (DT) has reached a new level of sophistication. Cybercriminals can now manipulate sounds, images, and videos to defraud and misinform individuals and businesses. This represents a growing threat to international institutions and individuals that needs to be addressed. This paper provides an overview of deepfakes, their benefits to society, and how DT works; it highlights the threats that deepfakes present to businesses, politics, and judicial systems worldwide. Additionally, the paper explores potential solutions to deepfakes and concludes with future research directions.
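
One family of countermeasures discussed in the detection literature is frame-level classification: fine-tune an ordinary image classifier to label individual video frames as real or fake. The PyTorch sketch below illustrates the setup; the ResNet-18 backbone, binary head, and hyperparameters are our illustrative assumptions, and a serious detector would need a large curated dataset and far more care:

    # Frame-level deepfake detection sketch: binary classifier over video frames.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a generic ImageNet backbone and swap in a real-vs-fake head;
    # this is an illustrative setup, not a proven detector.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(frames, labels):
        """One gradient step on a batch of (N, 3, 224, 224) frames."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with random tensors standing in for labeled frames.
    frames = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 2, (4,))
    print(train_step(frames, labels))

Per-frame scores are typically aggregated (e.g., averaged) over a clip before flagging a video, since single-frame artifacts are easy for generators to suppress.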


Why addressing bias in AI algorithms matters (Includes interview)

#artificialintelligence

To gain insight into these and other essential 2021 trends for businesses, Digital Journal caught up with Robert Prigge, CEO of Jumio. Addressing bias in AI algorithms will be a top priority, prompting guidelines for how machine learning handles ethnicity in facial recognition. Prigge explains: "Enterprises are becoming increasingly concerned about demographic bias in AI algorithms (race, age, gender) and its effect on their brand and potential to raise legal issues. Evaluating how vendors address demographic bias will become a top priority when selecting identity proofing solutions in 2021." Prigge adds: "According to Gartner, more than 95 percent of RFPs for document-centric identity proofing (comparing a government-issued ID to a selfie) will contain clear requirements regarding minimizing demographic bias by 2022, an increase from fewer than 15 percent today. Organizations will increasingly need clear answers about how a vendor's AI 'black box' was built, where the data originated from, and how representative the training data is of the broader population being served."
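
The demographic bias Prigge describes can be measured with simple fairness diagnostics. One common check is demographic parity: compare a system's positive-decision rate across groups and flag large gaps. A minimal sketch in Python (the data, the group labels, and the four-fifths threshold, a common rule of thumb rather than a universal legal standard, are all illustrative):

    # Demographic parity check: compare positive-decision rates across groups.
    from collections import defaultdict

    def positive_rates(decisions, groups):
        """decisions: 0/1 model outputs; groups: group label per decision."""
        totals, positives = defaultdict(int), defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            positives[g] += d
        return {g: positives[g] / totals[g] for g in totals}

    decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = positive_rates(decisions, groups)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)                          # {'A': 0.8, 'B': 0.4}
    print(f"parity ratio = {ratio:.2f}")  # below ~0.8 is a common red flag ("four-fifths rule")

Audits like this, run per race, age, and gender segment on representative data, are one concrete way to answer the "how was the black box built" questions the RFPs Prigge cites will demand.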