We shouldn't have to say this, but here's a reminder: girls can play some serious football. Take Kelly Macnamara, for instance. According to USA Today's FTW, the soccer player didn't have any football experience when she joined the North Penn High School team as kicker, but she's kicking ass anyway. Please observe her taking down an opponent who's about to run away with a touchdown -- an opponent, to be honest, who never stood a chance.
There are a few ways to tell if the Hearthstone player you're up against is top-tier Legend or just some scrub stuck at rank 15. You queue up your Midrange Hunter deck, press the "Play Standard" button and watch that wheel spin, hoping it lands on "Mediocre Monk" instead of "Worthy Opponent". Then you go up against a Warrior whose pilot does everything in their power to piss you off. They rope, make plays in the wrong order and spam that threaten emote just a bit too much. I've seen enough Hearthstone players like this that I can tell if someone deserves that Legend card back, or just got lucky when Secret Paladin was still a thing.
CAPTCHA has been widely deployed by commercial web sites as a security technology for purposes such as anti-spam. A common approach to evaluating the robustness of CAPTCHAs is the use of machine learning techniques. Critical to this approach is the acquisition of an adequate set of labeled samples on which the learning techniques are trained. However, such a sample labeling task is difficult for computers, since the strength of CAPTCHAs stems precisely from the difficulty computers have in recognizing distorted text or image contents. Therefore, until now, researchers have had to label their samples manually, which is tedious and expensive. In this paper, we present Magic Bullet, a computer game that for the first time turns such sample labeling into a fun experience, and that achieves a labeling accuracy as high as 98% for free. The game leverages human computation to address a task that cannot be easily automated, and it effectively streamlines the evaluation of CAPTCHAs. The game can also be used for other constructive purposes, such as 1) developing better machine learning algorithms for handwriting recognition and 2) training people's typing skills.
An important aspect of agent evaluation in stochastic games, especially poker, is the need to reduce the outcome variance in order to get accurate and significant results. The current method used in the Annual Computer Poker Competition's analysis is that of duplicate poker, an approach that leverages the ability to deal the same sets of cards to agents in order to reduce variance. This work explores a different approach to variance reduction by using a control-variate-based approach known as baseline. The baseline approach uses an agent's outcome in self play to create an unbiased estimator for use in agent evaluation, and it has been shown to work well in both poker and trading agent competition domains. Baseline does not require that agents can be dealt duplicate sets of cards, making it a more robust technique than duplicate. This approach is compared to the current duplicate method, as well as other variations of duplicate poker, on the results of the 2011 two-player no-limit and three-player limit Texas Hold'em ACPC tournaments.
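The control-variate idea behind baseline can be sketched with a toy simulation. This is not the ACPC evaluation pipeline or real poker; `simulate_hand` and its parameters are made up purely to illustrate the mechanism: the agent's self-play outcome on the same "cards" shares the luck component with the match outcome and has a known expectation (zero here), so subtracting it cancels most of the variance without biasing the estimate of the agent's edge.

```python
import random

random.seed(0)

def simulate_hand(luck, skill_edge):
    # Toy model (an assumption, not real poker): the outcome of a hand is
    # shared card luck, plus the agent's skill edge, plus small noise.
    return luck + skill_edge + random.gauss(0, 0.5)

n = 10_000
plain, baseline_adjusted = [], []
for _ in range(n):
    luck = random.gauss(0, 3)  # high-variance card luck
    # Match outcome: our agent (edge 0.1) vs. an opponent on these cards.
    match_outcome = simulate_hand(luck, skill_edge=0.1)
    # Baseline: the agent's self-play outcome on the SAME cards.
    # Its expectation is 0, so subtracting it leaves the estimator unbiased.
    self_play = simulate_hand(luck, skill_edge=0.0)
    plain.append(match_outcome)
    baseline_adjusted.append(match_outcome - self_play)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Both estimators target the same edge (0.1), but the baseline-adjusted
# one has far lower variance because the shared luck term cancels.
print(f"plain:    mean={mean(plain):.3f}  var={var(plain):.3f}")
print(f"baseline: mean={mean(baseline_adjusted):.3f}  var={var(baseline_adjusted):.3f}")
```

Note that, unlike duplicate poker, nothing here requires dealing mirrored card sets to two different agents; the baseline comes from the agent's own self play, which is what makes the technique applicable when duplicate dealing is impossible.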
The "Elo" rating system is a method most famous for ranking chess players, but which has now spread to many other sports and games. How Elo works is like this: when you start out in competitive chess, the federation assigns you an arbitrary rating -- either a standard starting rating (which I think is 1200), or one based on an estimate of your skill. Your rating then changes as you play. What I gather from Wikipedia is that "master" starts at a rating of about 2300, and "grandmaster" around 2500. To get from the original 1200 up to the 2300 level, you just start winning games.