The Game-Playing AI Does Not Always Win, It Turns Out
Players have often used KataGo to test their skills, train for other matches, and even analyze past games. Yet in a study recently posted on the preprint server arXiv, researchers report that by using an adversarial policy (a kind of machine-learning algorithm built to attack other systems or learn their weaknesses) they were able to beat KataGo at its own game between 50 and 99 percent of the time, depending on how much "thinking ahead" the AI does. "KataGo is able to recognize that passing would result in a forced win by our adversary, but given a low tree-search budget it does not have the foresight to avoid this," co-author Tony Wang, a Ph.D. student at MIT, said of the study on LessWrong, an online community dedicated to "causing safe and beneficial AI."
Dec-22-2022, 06:50:10 GMT
- Technology