Many methods for automated software test generation, including some that explicitly use machine learning (and some that use ML in a broader sense), derive new tests from existing tests, often referred to as seeds. The seed tests from which new tests are derived are frequently manually constructed, or at least simpler than the tests produced as the final outputs of such test generators. We propose annotating generated tests with a provenance trail showing how individual generated tests of interest (especially failing tests) derive from seed tests, and how the population of generated tests relates to the original seed tests. In some cases, post-processing of generated tests can invalidate provenance information; for those cases we also propose a method for attempting to construct "pseudo-provenance" describing how the tests could have been (partly) generated from seeds.
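The idea of a provenance trail for derived tests can be sketched as follows. This is a minimal illustration, not the paper's actual technique: the `Test` class, the toy mutation operator, and the record format are all hypothetical, standing in for whatever derivation operators a real seed-based test generator applies.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Test:
    # Toy test body: a sequence of integer "steps" standing in for real test actions.
    steps: list
    # Provenance trail: one (operation, parent_id, detail) record per derivation step.
    provenance: list = field(default_factory=list)

def mutate(test: Test, parent_id: str, rng: random.Random) -> Test:
    """Derive a new test from an existing one, appending a provenance record."""
    new_steps = list(test.steps)
    i = rng.randrange(len(new_steps))
    new_steps[i] += 1  # toy mutation: perturb one step
    return Test(
        steps=new_steps,
        provenance=test.provenance + [("mutate", parent_id, f"step {i}")],
    )

rng = random.Random(0)
# A manually constructed seed test is the root of every trail.
seed = Test(steps=[1, 2, 3], provenance=[("seed", "manual-seed-0", None)])
child = mutate(seed, "seed-0", rng)
grandchild = mutate(child, "gen-1", rng)

# The trail shows how the final (possibly failing) test derives from the seed.
for record in grandchild.provenance:
    print(record)
```

When a failing test is found, its trail can be walked back to the seed it came from; aggregating trails over the whole generated population likewise relates that population to the original seed set.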
Demand is growing for more accountability in the technological systems that increasingly occupy our world. However, the complexity of many of these systems, often systems of systems, poses accountability challenges. This is because the details and nature of the data flows that interconnect and drive systems, which often occur across technical and organisational boundaries, tend to be opaque. This paper argues that data provenance methods show much promise as a technical means for increasing the transparency of these interconnected systems. Given concerns with the ever-increasing levels of automated and algorithmic decision-making, we make the case for decision provenance. This involves exposing the 'decision pipeline' by tracking the chain of inputs to, and flow-on effects from, the decisions and actions taken within these systems. This paper proposes decision provenance as a means to assist in raising levels of accountability, discusses relevant legal conceptions, and indicates some practical considerations for moving forward.
Today we bring to this space the lecture "Mind Blowing Tech in Learning: AI, VR, and AR" featuring Prof. Donald Clark, of the Center for Online Innovation in Learning, which is presented as follows: Artificial intelligence (AI) is now the most potent force in IT and will shape learning technology, allowing us to escape from the 30-year paradigm of flat, linear e-learning. During this COIL Fischer Speaker Series presentation, Professor Donald Clark debunks some myths about AI and provides real examples of AI used now in content creation, feedback, assessment and spaced practice. In addition, he talks about virtual reality (VR) and augmented reality (AR) as reviving 'learning by doing', and their power to democratize experience. Donald Clark is an EdTech entrepreneur and was CEO and one of the original founders of Epic Group plc, which established itself as the leading company in the UK online learning market, floated on the Stock Market in 1996 and was sold in 2005. Now CEO of Wildfire Ltd, he also invests in, and advises, EdTech companies. Describing himself as 'free from the tyranny of employment', he is a board member of Cogbooks, LearningPool, WildFire and Deputy Chair of Brighton Dome & Arts Festival, as well as a Visiting Professor at The University of Derby and Fellow of the Royal Society of Arts (FRSA).