Nightfall nabs cash for AI that detects sensitive data across apps – TechCrunch

#artificialintelligence

Nightfall AI, a startup providing cloud data loss prevention services, today announced that it raised $40 million in Series B financing from investors including WestBridge Capital, Venrock, Bain Capital Ventures and -- for some reason -- athletes and celebrities including Paul Rudd, Drew Brees and Josh Childress. CEO Isaac Madan says that the proceeds will be put toward doubling Nightfall's 60-person headcount, scaling the platform to more customers and markets, and expanding Nightfall's partner ecosystem. Madan was previously a VC investor at Venrock, where he focused on early-stage investments in software as a service, security and machine learning. Cofounder Rohan Sathe was one of the founding engineers at Uber Eats, where he designed and built software to grow the platform's footprint. Madan says he and Sathe were inspired to launch Nightfall by Sathe's personal experiences with data breaches arising from poor "data security hygiene."
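
As a rough illustration of what a data loss prevention scan does, a toy detector can match content against patterns for well-known sensitive-data formats. The regexes and names below are hypothetical stand-ins; a product like Nightfall advertises trained ML detectors rather than simple pattern matching:

```python
import re

# Hypothetical detectors for common sensitive-data formats.
# Real DLP products use ML classifiers; these regexes are illustrative only.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (detector_name, matched_span) pairs found in `text`."""
    return [(name, m.group())
            for name, pattern in DETECTORS.items()
            for m in pattern.finditer(text)]

print(scan("Customer SSN 123-45-6789 was pasted into the support channel."))
# [('ssn', '123-45-6789')]
```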


Opaque Systems helps enterprises run collaborative analytics on confidential data - TheSpuzz

#artificialintelligence

San Francisco-based Opaque Systems, a company enabling collaborative analytics and AI for confidential computing, today announced it has raised $22 million in a series A round of funding. Confidential computing has been a game-changer for enterprises. It encrypts sensitive data in a protected CPU enclave or trusted execution environment (TEE), giving companies a way to move beyond policy-based privacy and security to safeguard their information in the cloud. However, with this level of encryption, which can only be unlocked with keys held by the client, multiple parties struggle to access, share, analyze and run AI/ML on the data in question.
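
As a minimal sketch of the client-held-keys property described above (using the open-source `cryptography` package; a real confidential-computing deployment decrypts only inside an attested TEE, which this toy example does not model):

```python
from cryptography.fernet import Fernet  # pip install cryptography

client_key = Fernet.generate_key()      # stays with the client, never uploaded
cipher = Fernet(client_key)

record = b"patient_id=42, diagnosis=..."
uploaded_blob = cipher.encrypt(record)  # the only thing the cloud ever sees

# The provider stores an opaque blob; without client_key it cannot read it,
# which is exactly why multi-party analytics on such data becomes hard.
assert cipher.decrypt(uploaded_blob) == record
```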


Three opportunities of Digital Transformation: AI, IoT and Blockchain

#artificialintelligence

Koomey's law This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, when applying the energy efficiency of computation of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications.

Metcalfe's law This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows quadratically with the number of its nodes (see Figure 1–8).
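
To make the arithmetic behind both laws concrete, here is a short Python sketch. The 12-hour battery life and the 2014 comparison year are our own assumptions for illustration; they roughly reproduce the 1.5-second figure quoted above:

```python
DOUBLING_PERIOD_YEARS = 1.5  # Koomey's law

def koomey_multiplier(from_year: int, to_year: int) -> float:
    """Energy-efficiency gain implied by Koomey's law."""
    return 2 ** ((to_year - from_year) / DOUBLING_PERIOD_YEARS)

gain = koomey_multiplier(1992, 2014)          # ~26,000x more efficient
battery_seconds = 12 * 3600                   # assumed 12-hour battery life
print(f"Same workload at 1992 efficiency: {battery_seconds / gain:.1f} s")  # ~1.7 s

def metcalfe_value(n: int) -> int:
    """Metcalfe's law: value grows with the possible pairwise
    connections, n*(n-1)/2, i.e. quadratically in n."""
    return n * (n - 1) // 2

print(metcalfe_value(10), metcalfe_value(100))  # 45 vs 4950: 10x nodes, ~110x value
```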


MixMode lands $45M for self-learning security platform that combats zero days

#artificialintelligence

MixMode, which today announced a $45 million series B funding round, has a massive opportunity ahead to deploy its self-learning, "third-wave" AI system to proactively secure customers against previously unknown cyberattacks, CEO John Keister told VentureBeat. A significant portion of the hundreds of billions of dollars spent each year on cybersecurity is focused on signature-based solutions, which only protect against the 20% of successful attacks that had previously been seen, Keister said. But the other 80% of cyberattacks (according to figures from the Ponemon Institute) are novel attacks -- and identification of those requires advanced AI capabilities, he said. "The existing systems simply don't address that 80%," Keister said.
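
MixMode's models are proprietary; purely to illustrate the signature-versus-novel-attack distinction Keister describes, the toy Python contrast below pairs a signature lookup with a crude self-learned statistical baseline (all data and names are made up):

```python
import statistics

KNOWN_BAD_SIGNATURES = {"a3f9", "b7c2"}  # hashes of previously seen attacks

def signature_detect(payload_hash: str) -> bool:
    # Catches only the ~20% of attacks that have been seen before.
    return payload_hash in KNOWN_BAD_SIGNATURES

def anomaly_detect(history: list[float], value: float, k: float = 3.0) -> bool:
    """Flag `value` if it deviates more than k standard deviations from the
    learned baseline -- a crude stand-in for a self-learning detector."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) > k * sigma

requests_per_min = [52, 48, 50, 55, 49, 51, 53, 47]  # learned "normal" traffic
print(anomaly_detect(requests_per_min, 50))   # False: within the baseline
print(anomaly_detect(requests_per_min, 400))  # True: never-seen-before spike
```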


The Top 5 Technology Trends for 2022: The Year of Decentralisation

#artificialintelligence

Last year, I dubbed 2021 the Year of Digitalism, as I foresaw an increase in corporate and governmental data surveillance. Unfortunately, it is safe to say that this has come true, with Big Tech becoming more powerful than ever before and governments worldwide implementing Covid tracking apps. The pandemic has also been a strong catalyst for digital transformation in every sector, and the world is currently changing at lightning speed. There are economic changes such as rising inflation rates, environmental disasters caused by climate change, social changes such as The Great Resignation, and a convergence of technologies that drives technological change. Although the world has never changed as fast as it did in 2021, this year was also the most stable of all the years to come in this decade.


Technology Ethics in Action: Critical and Interdisciplinary Perspectives

arXiv.org Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.


Systems Challenges for Trustworthy Embodied Systems

arXiv.org Artificial Intelligence

A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems into a real-life context we face various engineering challenges, as it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and is now faced with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems in uncertain, complex, and unpredictable real-world contexts. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centric AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.


Forecasting: theory and practice

arXiv.org Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with extensive lists of free or open-source software implementations and publicly available databases.
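
As a tiny, self-contained taste of the classical methods such a review covers, here is simple exponential smoothing implemented directly from its recurrence s_t = α·y_t + (1 − α)·s_{t−1}; the demand series is invented:

```python
def ses_forecast(y: list[float], alpha: float = 0.3) -> float:
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    s = y[0]
    for value in y[1:]:
        s = alpha * value + (1 - alpha) * s
    return s

demand = [120, 132, 118, 125, 140, 137, 129]
print(f"Next-period forecast: {ses_forecast(demand):.1f}")
```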


Ethical and social risks of harm from Language Models

arXiv.org Artificial Intelligence

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.


Differentially Private Ensemble Classifiers for Data Streams

arXiv.org Machine Learning

Learning from continuous data streams via classification/regression is prevalent in many domains. Adapting to evolving data characteristics (concept drift) while protecting data owners' private information is an open challenge. We present a differentially private ensemble solution to this problem with two distinguishing features: it allows an unbounded number of ensemble updates to deal with the potentially never-ending data streams under a fixed privacy budget, and it is model agnostic, in that it treats any pre-trained model as a black box.

To further add to the challenge, data streams from many domains involve sensitive, personal information about contributing users, such as patients' records and user data in mobile applications, protection of which is of paramount interest. While concept drift and privacy have been extensively studied in isolation, works considering both are in their infancy. See more discussion in Section 2. In this work, our goal is to allow machine learning models to deal with concept drift when training on potentially never-ending data streams involving sensitive data, where the model(s) learned can be published without disclosing sensitive information.
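
The paper's own mechanism is more involved, but as a hedged sketch of the general pattern behind differentially private, model-agnostic ensemble prediction (a "report noisy max" over the black-box models' votes, using NumPy; the noise scale and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_ensemble_predict(votes: list[int], n_classes: int, epsilon: float) -> int:
    """Aggregate black-box model votes and release only a noisy winner,
    so no single training record can swing the published answer."""
    counts = np.bincount(votes, minlength=n_classes).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=n_classes)
    return int(np.argmax(noisy))

votes = [2, 2, 1, 2, 0]  # five pre-trained models vote on a 3-class problem
print(dp_ensemble_predict(votes, n_classes=3, epsilon=1.0))
```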