AI's next frontier: AlphaCode can match programming prowess of average coders


Artificial intelligence software programs are becoming shockingly adept at carrying on conversations, winning board games and generating artwork -- but what about creating software programs? In a newly published paper, researchers at Google DeepMind say their AlphaCode program can keep up with the average human coder in standardized programming contests.

"This result marks the first time an artificial intelligence system has performed competitively in programming contests," the researchers report in this week's issue of the journal Science.

There's no need to sound the alarm about Skynet just yet: DeepMind's code-generating system earned an average ranking in the top 54.3% in simulated evaluations on recent programming competitions on the Codeforces platform -- which is a very "average" average.

"Competitive programming is an extremely difficult challenge, and there's a massive gap between where we are now (solving around 30% of problems in 10 submissions) and top programmers (solving 90% of problems in a single submission)," DeepMind research scientist Yujia Li, one of the Science paper's principal authors, told GeekWire in an email.
