The emergence of, and continued reliance on, the Internet and related technologies has resulted in the generation of large amounts of data available for analysis. However, humans do not possess the cognitive capability to make sense of such large amounts of data on their own. Machine learning (ML) provides a mechanism for processing large amounts of data, gaining insights into their behavior, and making more informed decisions based on the resulting analysis. ML has applications in various fields; this review focuses on education, healthcare, network security, banking and finance, and social media. Each of these fields faces multiple unique challenges, and ML can provide solutions to these challenges as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields, presents previous works that tackled them, and suggests several research opportunities in which ML can help address these challenges.
Elements of AI is a free online course that launched in Ireland last month and is designed to be accessible to complete beginners. At Future Human last month, it was announced that the University of Helsinki has teamed up with University College Cork to bring artificial intelligence "into the sitting rooms and kitchens of Irish homes". This will happen through Elements of AI, a free online course designed and organised by the University of Helsinki and the Finnish tech company Reaktor. Prof Teemu Roos of the University of Helsinki told the Future Human audience that the course was designed to be accessible from the start, beginning with what artificial intelligence is and how we encounter it every day, before moving on to how it works and what its basic principles are. "There's no programming in the course, you don't have to know any programming to get started and complete the course. There's hardly any mathematics," he said.
The birth of massive open online courses (MOOCs) has had an undeniable effect on how teaching is delivered. Traditional in-class teaching seems to be becoming less popular with the young generation, a generation that wants to choose when, where, and at what pace it learns. As such, many universities are moving their courses, at least partially, online. However, online courses, although very appealing to the younger generation of learners, come at a cost. For example, their dropout rate is higher than that of more traditional courses, and the reduced in-person interaction with teachers results in less timely guidance and intervention from educators. Machine learning (ML) based approaches have shown phenomenal success in other domains. The common perception that ML-based techniques require large amounts of data seems to be a bottleneck when dealing with small-scale courses that produce limited data. In this study, we show not only that the data collected from an online learning management system can be used to predict students' overall performance, but also that it can be used to propose timely intervention strategies to boost students' performance. The results of this study indicate that effective intervention strategies can be suggested as early as the middle of the course to change the trajectory of students' progress for the better. We also present an assistive pedagogical tool, based on the outcome of this study, that helps identify struggling students and suggest early intervention strategies.
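The kind of pipeline described above can be sketched in a few lines. The feature set below (logins, forum posts, mid-course quiz average) and the labelling rule are invented for illustration, not the study's actual data, and the classifier is a plain logistic regression rather than whatever model the study used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mid-course LMS features per student:
# [login count, forum posts, quiz average], all scaled to [0, 1].
n = 400
X = rng.random((n, 3))
# Invented labelling rule: the quiz average dominates the pass/fail outcome.
y = (0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.7 * X[:, 2] > 0.5).astype(float)

# Plain logistic regression trained by gradient descent.
w, b, lr = np.zeros(3), 0.0, 1.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted pass probability
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")

# Students predicted to fail could be flagged for early intervention.
at_risk = np.where(pred == 0)[0]
print(f"{at_risk.size} students flagged for intervention")
```

The point of running this at mid-course rather than at the end is exactly the early-intervention window the abstract describes: predictions made halfway through still leave time to act on them.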
The Elements of AI is a free online course for everyone interested in learning what AI is, what is (and is not) possible with AI, and how it affects our lives – with no complicated math or programming required. By completing the course you can earn a LinkedIn certificate. People in Finland can also earn 2 ECTS credits through the Open University. The course is available from May 14, 2018.
With the emergence of e-learning and personalised education, the production and distribution of digital educational resources have boomed. Video lectures have become one of the primary modalities for imparting knowledge to the masses in the current digital age. The rapid creation of video lecture content challenges the currently established human-centred moderation and quality assurance pipeline, demanding more efficient, scalable, and automatic solutions for managing learning resources. Although a few datasets related to engagement with educational videos exist, there is still an important need for data and research aimed at understanding learner engagement with scientific video lectures. This paper introduces VLEngagement, a novel dataset consisting of content-based and video-specific features extracted from publicly available scientific video lectures, together with several metrics related to user engagement. We introduce several novel tasks related to predicting and understanding context-agnostic engagement in video lectures, providing preliminary baselines. To our knowledge, this is the largest and most diverse publicly available dataset for such tasks. The extraction of Wikipedia topic-based features also allows more sophisticated Wikipedia-based features to be associated with the dataset to improve performance on these tasks. The dataset, helper tools, and example code snippets are publicly available at https://github.com/sahanbull/context-agnostic-engagement
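A context-agnostic engagement baseline of the kind the paper mentions amounts to regressing an engagement metric on content-based features alone. The sketch below uses invented features (duration, speaking rate, slide-time fraction) and a synthetic target; the real VLEngagement schema differs and is documented in the repository linked above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for content-based video features, scaled to [0, 1]:
# [normalised duration, speaking rate, fraction of time on slides].
n = 300
X = rng.random((n, 3))
# Invented relationship: shorter, faster-paced lectures are watched longer,
# plus some noise. The target mimics a normalised watch-time metric.
engagement = 0.6 - 0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * rng.standard_normal(n)

# Ordinary least squares as a simple context-agnostic baseline predictor.
A = np.column_stack([X, np.ones(n)])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, engagement, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - engagement) ** 2)))
print(f"baseline RMSE: {rmse:.3f}")
```

"Context-agnostic" here means the model sees only properties of the video itself, never of the individual learner, which is what makes such predictions usable for moderating newly uploaded lectures with no viewing history.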
Way2AI is a group of enthusiasts and specialists in AI and machine learning, created by Long Nguyen, who holds a PhD in AI (France), that aims to teach people about this emerging technology. AI is really changing the world! Almost every domain can benefit from its power, from business and healthcare to transport, entertainment, and the military. Investment in AI keeps growing, but the field still lacks qualified employees. We therefore hope that our contribution can help many people find a fast and easy way to approach AI.
Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures, rendering them well-suited candidates for decentralized learning. Oftentimes, this decentralized graph support changes over time due to link failures or topology variations. These changes create a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Online learning can be used to retrain GNNs at testing time, overcoming this issue. However, most online algorithms are centralized and work on convex problems (which GNNs rarely lead to). This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms. The WD-GNN comprises two components: the wide part is a bank of linear graph filters and the deep part is a GNN. At training time, the joint architecture learns a nonlinear representation from data. At testing time, the deep part (nonlinear) is left unchanged, while the wide part is retrained online, leading to a convex problem. We derive convergence guarantees for this online retraining procedure and further propose a decentralized alternative. Experiments on robot swarm control for flocking corroborate the theory and show the potential of the proposed architecture for distributed online learning.
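The key structural idea, freezing the nonlinear deep part and refitting only a bank of linear graph filters, can be illustrated on a toy graph. Everything below (graph size, the one-layer tanh "deep" part, the least-squares refit) is a simplification for illustration, not the WD-GNN architecture or its online update as defined in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

N, F, K = 10, 4, 3                 # nodes, input features, filter taps
S = rng.random((N, N))
S = (S + S.T) / 2                  # symmetric graph shift operator
S /= np.abs(np.linalg.eigvalsh(S)).max()   # normalise spectral radius

X = rng.standard_normal((N, F))    # node features at test time
target = rng.standard_normal((N, 1))       # hypothetical control signal

# "Deep" part: one graph convolution with a tanh nonlinearity, with weights
# assumed pretrained and kept frozen at test time.
W_deep = rng.standard_normal((F, 1))
y_deep = np.tanh(S @ X @ W_deep)

# "Wide" part: a bank of K linear graph filters, sum_k S^k X h_k.
# Stacking the shifted features makes fitting h an ordinary least-squares
# (hence convex) problem, which is what enables online retraining.
Z = np.concatenate([np.linalg.matrix_power(S, k) @ X for k in range(K)], axis=1)
h, *_ = np.linalg.lstsq(Z, target - y_deep, rcond=None)

y = y_deep + Z @ h                 # joint wide-and-deep output
residual = float(np.linalg.norm(y - target))
print(f"residual after retraining the wide part: {residual:.3e}")
```

Because only `h` is updated while `W_deep` stays fixed, each retraining step solves a linear problem, which is the property the abstract's convexity and convergence claims rest on.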
In display advertising, a small group of sellers and bidders face each other in up to 10^12 auctions a day. In this context, revenue maximisation via monopoly price learning is a high-value problem for sellers. By nature, these auctions are online and produce a very high-frequency stream of data. The resulting computational strain requires algorithms to run in real time. Unfortunately, existing methods inherited from the batch setting suffer O(√t) time/memory complexity at each update, prohibiting their use. In this paper, we provide the first algorithm for online learning of monopoly prices in online auctions whose update is constant in time and memory.
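To see why constant-per-update learning matters at this scale, consider the simplest possible baseline: scoring a fixed grid of candidate reserve prices as second-price auctions stream in. Each update costs O(K) time and memory for a grid of K prices, independent of the number of auctions seen. This grid search is only an illustration of the online setting, not the algorithm proposed in the paper, and the uniform bids are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed grid of K candidate reserve prices and their running revenue totals;
# per-auction cost is O(K), constant in the number of auctions t.
grid = np.linspace(0.0, 1.0, 21)
cum_revenue = np.zeros_like(grid)

T = 5000
for _ in range(T):
    bids = rng.random(2)                   # two i.i.d. uniform bids
    b1, b2 = max(bids), min(bids)
    # Second-price auction with reserve r: no sale if r > b1,
    # otherwise the winner pays max(second bid, r).
    cum_revenue += np.where(grid > b1, 0.0, np.maximum(grid, b2))

best = float(grid[np.argmax(cum_revenue)])
print(f"empirically best reserve on the grid: {best:.2f}")
```

For two uniform [0, 1] bids the revenue-optimal reserve is known to be 0.5, so the empirical best should land near it. The batch methods the abstract criticises would instead re-sort or re-scan past bids at each update, which is where the O(√t) per-update cost comes from.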
I started in Data Science back in 2015. It was not an intended move but an answer to the needs of my employer. I was working for a company providing automation services to Spanish corporations, and we needed to leverage data to automate complex tasks whose rules could not easily be hard-coded. I had recently graduated as an engineer in the middle of a terrible economic crisis, had some statistical modeling knowledge, and was proficient in MATLAB. In 2015 there were no specialized Data Science degrees or boot camps to jump-start a career in the field (at least in Spain), and the closest existing studies were, in this order: Mathematics (in Spain, with a strong focus on becoming a teacher or professor in the public education system) or Software Engineering (most graduates more interested in app development or creating the new Uber of "X" than in boring Data Science stuff back then).
Kennisnet Technology Compass 2019-2020, which begins as follows: Please note: this report is written from a Dutch perspective and with the Dutch educational system and its structure in mind. Please take this into account when reading this report. What will you find in this technology compass? If someone had told you 25 years ago – roughly at the time the internet started to rise – that in 2019 you would be swiping on your smartphone for multiple hours a day, that thanks to the internet you'd know exactly what time your aunt in France was drinking her latte, or that teenagers could become drone pilots during their vocational studies, would you have believed that person? Probably not, as nobody can predict the future.