Calgary
geographies of software, AAG 2017 (nick lally // art, geography, software)
A variety of technologies have emerged in the last decade that make it easier and cheaper than ever before to make representations of everyday mobile embodiment. Increasing numbers of people are quantifying and self-tracking their everyday lives, recording behavioural, biological and environmental data (Beer, 2016; Neff & Nafus, 2016) using a variety of technologies, for example:
• lightweight wearable cameras such as the GoPro, allowing users to record footage of their most banal everyday activities;
• devices such as the Fitbit and Apple Watch, bringing continuous physiological monitoring out of the medical realm and into mainstream culture;
• apps like Strava, allowing people to quantify their cycling, running and walking activities;
• lightweight devices for measuring brain activity (EEG) and electrodermal activity (EDA), now sufficiently robust and discreet to be used outside the lab.
None of the underlying technologies are novel, but as they are packaged in cheaper and more user-friendly forms, new techniques and sources of data are becoming readily available for geographical analysis. Engagement with these technologies has created a rapidly expanding area of investigation within geography. The emergence of the quantified self poses both opportunities and dilemmas for geographical thought. We wish to move past simplistic protests that dismiss such technology as offering another take on Haraway's (1988) 'god trick', presenting partial and highly situated data as objective truth. Instead, this session will build on the potential identified by DeLyser and Sui (2013) to take more inventive approaches toward mobile methods. The focus will be on how critical geographers can engage with these technologies to bring new perspectives to their analysis of everyday embodiment.
Machine learning could help companies react faster to ransomware
File-encrypting ransomware programs have become one of the biggest threats to corporate networks worldwide and are constantly evolving, adding increasingly sophisticated detection-evasion and propagation techniques. In a world where any self-respecting malware author makes sure their creations bypass antivirus detection before releasing them, enterprise security teams are forced to focus on improving their response times to infections rather than trying to prevent them all, which is likely to be a losing game. Exabeam, a provider of user and entity behavior analytics, believes that machine-learning algorithms can significantly improve ransomware detection and reaction time, preventing such programs from spreading inside the network and affecting a larger number of systems. Because the ransom demanded by ransomware authors is calculated per system, isolating affected computers as soon as possible is critical. Only last week the University of Calgary announced that it had paid 20,000 Canadian dollars (around US$15,600) to ransomware authors to get the decryption keys for multiple systems.
Exabeam's Analytics for Ransomware, a new product that was announced today, uses the company's existing behavior analytics technology to detect ransomware infections shortly after they occur.
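Neither Exabeam nor the article describes the detection logic itself, but a toy example can illustrate the general idea behind behavioural baselining for ransomware: learn each host's normal rate of file modifications and flag sudden bursts, which are a common side effect of mass encryption. The function names, thresholds and z-score heuristic below are entirely hypothetical and only sketch this class of technique, not any vendor's product.

```python
# Purely illustrative sketch of behavioural anomaly detection for ransomware.
# We track per-minute file-modification counts for one host and flag the host
# when the current rate is far above its own historical baseline.
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=4.0):
    """history: past per-minute file-modification counts for one host."""
    if len(history) < 10:
        return False                      # not enough baseline data yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu + 10          # fallback for a perfectly flat baseline
    return (current - mu) / sigma > z_threshold

baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4, 3, 5]
print(is_anomalous(baseline, 4))      # False: normal activity
print(is_anomalous(baseline, 250))    # True: sudden burst of file modifications
```

A real system would combine many such signals (file renames, entropy of written data, access to network shares) and tie a detection to an automated response such as isolating the host.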
Calgary neuroscientist leading the way in robotic surgery
Larry Doherty was in good hands, steady hands, like the metal ones you can find on an automaker's assembly line. The 64-year-old bean salesman from Bow Island, Alta., had come to the University of Calgary's Department of Clinical Neurosciences and Hotchkiss Brain Institute to undergo arteriovenous malformation surgery – to untie the tangled blood vessels in his brain. When everyone in the operating room was ready, the operating surgeon began his work sitting in a whole other room surrounded by computer monitors, including one with a 3-D image of Mr. Doherty's brain. Using specially designed hand controls, Dr. Garnette Sutherland manoeuvred the robot to its ready position. For Mr. Doherty, it was the first time in his life he had undergone surgery.
The Gold Standard: Automatically Generating Puzzle Game Levels
Williams-King, David (University of Calgary) | Denzinger, Jörg (University of Calgary) | Aycock, John (University of Calgary) | Stephenson, Ben (University of Calgary)
KGoldrunner is a puzzle-oriented platform game with dynamic elements. This paper describes Goldspinner, an automatic level generation system for KGoldrunner. Goldspinner has two parts: a genetic algorithm that generates candidate levels, and simulations that use an AI agent to attempt to solve the level from the player's perspective. Our genetic algorithm determines how "good" a candidate level is by examining many different properties of the level, all based on its static aspects. Once the genetic algorithm identifies a good candidate, simulations are performed to evaluate the dynamic aspects of the level. Levels that are statically good may not be dynamically good (or even solvable), making simulation an essential aspect of our level generation system. By carefully optimizing our genetic algorithm and simulation agent we have created an efficient system capable of generating interesting levels in real time.
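To make the two-stage design concrete, here is a minimal, hypothetical sketch of a generate-and-test loop in the same spirit: a genetic algorithm scores candidates on static properties, and only statically promising levels are handed to an (here stubbed-out) agent simulation. The tile encoding, fitness weights and function names are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of a Goldspinner-style two-stage level generator:
# genetic search over static properties, then simulation of an AI agent.
import random

GRID_W, GRID_H = 12, 8
TILES = [".", "#", "$"]          # empty, brick, gold (hypothetical encoding)

def random_level():
    return [[random.choice(TILES) for _ in range(GRID_W)] for _ in range(GRID_H)]

def static_fitness(level):
    """Score static properties, e.g. amount of gold and brick density."""
    gold = sum(row.count("$") for row in level)
    bricks = sum(row.count("#") for row in level)
    # Prefer a handful of gold pieces and a moderate brick density.
    return -abs(gold - 6) - abs(bricks - GRID_W * GRID_H // 3) * 0.1

def mutate(level):
    child = [row[:] for row in level]
    child[random.randrange(GRID_H)][random.randrange(GRID_W)] = random.choice(TILES)
    return child

def agent_can_solve(level):
    """Stand-in for the simulation stage: an AI agent would try to collect all
    gold from the player's perspective. Here it is just a placeholder."""
    return random.random() < 0.3

def generate(generations=200, pop_size=30):
    population = [random_level() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=static_fitness, reverse=True)
        # Only statically good candidates reach the expensive simulation check.
        for candidate in population[:3]:
            if agent_can_solve(candidate):
                return candidate
        # Otherwise, breed the next generation from the best half.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return None

if __name__ == "__main__":
    level = generate()
    print("solvable level found" if level else "no level found")
```

The key design point the paper makes survives even in this toy version: static fitness is cheap to evaluate over many candidates, while simulation is expensive but necessary, because a statically plausible level may still be unsolvable.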
On Prediction Using Variable Order Markov Models
Begleiter, R., El-Yaniv, R., Yona, G.
This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.
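As a rough illustration of the setting (not of any of the six algorithms benchmarked in the paper), the sketch below trains a simple variable-order Markov predictor with back-off to shorter contexts and add-one smoothing, and evaluates it with the average log-loss used as the paper's quality measure. The class name, maximum order and smoothing choices are illustrative assumptions.

```python
# Minimal variable-order Markov predictor with back-off and add-one smoothing,
# plus the average log-loss used to measure prediction quality.
import math
from collections import defaultdict

class VOMM:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.alphabet = None
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count

    def train(self, seq):
        self.alphabet = sorted(set(seq))
        for i, sym in enumerate(seq):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[tuple(seq[i - k:i])][sym] += 1

    def prob(self, ctx, sym):
        # Back off to the longest context that was actually seen in training.
        for k in range(min(self.max_order, len(ctx)), -1, -1):
            c = tuple(ctx[len(ctx) - k:])
            if c in self.counts:
                table = self.counts[c]
                total = sum(table.values())
                return (table[sym] + 1) / (total + len(self.alphabet))  # add-one smoothing
        return 1 / len(self.alphabet)

    def average_log_loss(self, seq):
        """Average log-loss: -(1/T) * sum_t log2 P(x_t | x_1 .. x_{t-1})."""
        loss = 0.0
        for i, sym in enumerate(seq):
            loss -= math.log2(self.prob(seq[:i], sym))
        return loss / len(seq)

model = VOMM(max_order=3)
model.train("abracadabraabracadabra")
print(model.average_log_loss("abracadabra"))
```

Lower average log-loss means better next-symbol prediction; it is also the per-symbol code length a compressor built on the predictor would achieve, which is why lossless compression algorithms fall inside the class the paper studies.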
A Belief Revision Framework for Revising Epistemic States with Partial Epistemic States
Ma, Jianbing (Queen's University of Belfast) | Liu, Weiru (Queen's University of Belfast) | Benferhat, Salem
Belief revision performs belief change on an agent's beliefs when new evidence (either in the form of a propositional formula or of a total pre-order on a set of interpretations) is received. Jeffrey's rule is commonly used for revising probabilistic epistemic states when new information is probabilistically uncertain. In this paper, we propose a general epistemic revision framework where new evidence is in the form of a partial epistemic state. Our framework extends Jeffrey's rule with uncertain inputs and covers well-known existing frameworks such as ordinal conditional functions (OCF) and possibility theory. We then define a set of postulates that such revision operators should satisfy and establish representation theorems to characterize those postulates. We show that these postulates reveal common characteristics of various existing revision strategies and are satisfied by OCF conditionalization, Jeffrey's rule of conditioning and possibility conditionalization. Furthermore, when reduced to the belief revision setting, our postulates induce most of Darwiche and Pearl's postulates.
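For readers unfamiliar with the probabilistic special case being generalized, the sketch below implements Jeffrey's rule of conditioning: given a prior over worlds and new probabilities for the cells of a partition, each cell is rescaled to its new probability while relative weights inside a cell are preserved. The toy worlds and numbers are illustrative, not the paper's.

```python
# Jeffrey's rule of conditioning: P'(w) = q_i * P(w) / P(E_i) for w in cell E_i,
# where q_i is the new (uncertain) probability assigned to E_i.

def jeffrey_revision(prior, partition, new_probs):
    """prior: dict world -> probability; partition: dict cell -> set of worlds;
    new_probs: dict cell -> revised probability of that cell."""
    posterior = {}
    for cell, worlds in partition.items():
        p_cell = sum(prior[w] for w in worlds)
        for w in worlds:
            # Within each cell, worlds keep their relative weights.
            posterior[w] = new_probs[cell] * prior[w] / p_cell
    return posterior

# Toy example: worlds encode (rain?, sprinkler?); new evidence revises P(rain) to 0.7.
prior = {"r&s": 0.06, "r&~s": 0.24, "~r&s": 0.14, "~r&~s": 0.56}
partition = {"rain": {"r&s", "r&~s"}, "no_rain": {"~r&s", "~r&~s"}}
posterior = jeffrey_revision(prior, partition, {"rain": 0.7, "no_rain": 0.3})
print(posterior)  # P(rain) becomes 0.7; relative weights within each cell are unchanged
```

The paper's framework plays the analogous game for qualitative epistemic states (OCFs, possibility distributions) and for evidence that is only a partial epistemic state rather than a full distribution over a partition.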