Rosa recently took steps to scale up research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. In some rounds, participants will be tasked with designing algorithms and programming AI agents. The Challenge kicked off on 15 February with a six-month "warm-up" round dedicated to building gradually learning AI agents. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for developers.
Last week, an artificial intelligence bot created by the Elon Musk-backed start-up OpenAI defeated some of the world's most talented players of Dota 2, a fast-paced, highly complex, multiplayer online video game that draws fierce competition from all over the globe. Danylo "Dendi" Ishutin, one of the game's top players, was defeated twice by his AI opponent, which felt "a little like human, but a little like something else," he said, according to the Verge. Tesla chief executive Elon Musk hailed the bot's achievement as historic on Twitter before going on to once again express his concerns about artificial intelligence, which he said poses "vastly more risk than North Korea." Dota 2 is vastly more complex than traditional board games such as chess and Go.
I chose to work on one of the datasets suggested by DeepGram: EEG readings from a Stanford research project that used linear discriminant analysis to predict which category of image its test subjects were viewing. Winning Kaggle teams have successfully applied artificial neural networks to EEG data (see the first-place winner of the grasp-and-lift challenge and the third-place winner of the seizure prediction competition). The six major categories of images shown to test subjects were: human body, human face, animal body, animal face, natural object, and man-made object. The two plots below show the training history of the CNN model's accuracy and categorical cross-entropy loss on the test data set as well as the holdout data set (labeled as "validation" in the plots).
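To make the baseline concrete, here is a minimal sketch of the linear-discriminant-analysis approach mentioned above, applied to synthetic stand-ins for EEG feature vectors. The data, the 124-feature dimensionality, and the even/odd train–holdout split are all illustrative assumptions, not details from the Stanford study.

```python
# Hedged sketch: classifying synthetic "EEG-like" feature vectors into the six
# image categories with linear discriminant analysis. All data here is random
# and purely illustrative; only the six category names come from the text.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
CATEGORIES = ["human body", "human face", "animal body",
              "animal face", "natural object", "man-made object"]

n_per_class, n_features = 60, 124   # 124-dim feature vectors (assumed)
X, y = [], []
for label in range(len(CATEGORIES)):
    centre = rng.normal(scale=2.0, size=n_features)   # class-specific mean
    X.append(centre + rng.normal(size=(n_per_class, n_features)))
    y.append(np.full(n_per_class, label))
X, y = np.vstack(X), np.concatenate(y)

lda = LinearDiscriminantAnalysis()
lda.fit(X[::2], y[::2])                     # train on even rows
holdout_acc = lda.score(X[1::2], y[1::2])   # evaluate on odd rows
print(f"holdout accuracy: {holdout_acc:.2f}")
```

On real EEG data the classes are far less separable than these synthetic clusters, which is why the CNN described above is worth the extra effort.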
On the image classification challenge, the prize went to a team called WMW, which included two experts from Beijing-based startup Momenta and another from Oxford University. Another team, called DBAT, won the title in the object detection challenge, with an accuracy rate of 73.1 percent. Since its launch in 2010, the ImageNet Large Scale Visual Recognition Challenge has become a benchmark AI competition in object category classification and detection on hundreds of object categories and millions of images. In 2016, Trimps-Soushen, a team supported by the Ministry of Public Security, won in object recognition and detection, while researchers from Nanjing University of Information Science and Technology won in the video identification task.
This learning path would be extremely useful for anyone who wants to learn machine learning, deep learning, or data science this year. Moreover, coding in hackathons brings you closer to developing data products that solve real-world problems. There are a few specific machine learning algorithms that come in handy when solving specific problems. For example, try solving online click prediction on large data sets without applying online learning algorithms, and you will know what I am talking about.
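The point about online learning can be illustrated with a short sketch: instead of loading the whole click log into memory, the model is updated one mini-batch at a time via `partial_fit`. The stream below is synthetic, and the batch size, feature count, and hidden "true" click model are all illustrative assumptions.

```python
# Hedged sketch of online learning for click prediction: a linear model
# updated incrementally with partial_fit, so the full dataset never has
# to fit in memory. The click stream is simulated.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(random_state=0)   # online linear classifier (SGD)

true_w = rng.normal(size=20)            # hidden "true" click model (assumed)
classes = np.array([0, 1])              # no-click / click

for _ in range(200):                    # stream of mini-batches
    X = rng.normal(size=(100, 20))      # 100 impressions per batch
    p = 1.0 / (1.0 + np.exp(-X @ true_w))
    y = (rng.random(100) < p).astype(int)
    model.partial_fit(X, y, classes=classes)   # one incremental update

# sanity check on a fresh batch of impressions
X_test = rng.normal(size=(1000, 20))
p = 1.0 / (1.0 + np.exp(-X_test @ true_w))
y_test = (rng.random(1000) < p).astype(int)
acc = model.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The same pattern scales to click logs of arbitrary size, since each batch is discarded after its `partial_fit` call.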
A three-day hackathon on campus brought together students and researchers from MIT and around Boston who developed functional fabric concepts to solve major issues facing soldiers in combat or training, first responders, victims and workers in refugee camps, and many others. Remote Triage, formed by MIT students, designed an automated triage system for field medics, consisting of sensor-laden clothing that detects potential injury and a web platform that prioritizes care. Another team, Security Blanket, designed a double-sided, multipurpose blanket for people displaced from their homes, based on an idea from a Drexel University student. On Friday night, hackathon participants listened to talks from various experts -- including military officers, first responders, and government representatives -- who described major challenges they face in their fields.
NEW DELHI: Indian Institute of Technology (IIT) Delhi is hosting a week-long hackathon on artificial intelligence starting Friday (July 28 – August 4). This hackathon is being led by Anshul Bhagi, an MIT/Harvard alumnus who is also the co-director of OpenEd.ai, a non-profit organisation committed to developing and promoting open-source AI for education. Among the other partners for the event are Niti Aayog, Omidyar Network (a venture capital firm), IBM, Amazon Web Services, and Google Developer Groups, Bhagi shared. "These organizations are offering a total of $17,000 as prize money for participants," Bhagi said.
Launched in 2016, SC2's goal is to create a collaborative machine-learning competition to address radio frequency (RF) spectrum challenges. It can calculate more than 65,000 channel interactions among 256 wireless devices in real time and can emulate thousands of interactions between all types of wireless devices, including Internet of Things (IoT) devices, cellphones, military radios, and the like. In short, said Manuel Uhm, director of marketing at Ettus Research (an NI company) and chair of the Board of Directors of the Wireless Innovation Forum, at NI Week: "This huge effort will enable warfighters and their radios to have spectrum situational awareness through the machine learning and AI [artificial intelligence] capabilities." Uhm explained: "If you place the intelligence in the cloud, you have a number of dummy nodes, whether they're radios or some other type of sensor node that is just essentially passing sensor data to the cloud."
Julian and I independently wrote summaries of our solution to the 2017 Data Science Bowl. A tricky detail I found while reading about the LUNA competition is that different CT machines produce scans with different sampling rates in the 3rd dimension. (An example of a malignant nodule, highlighted in blue, appears in the original post.) Anyway, the LUNA16 dataset had some crucial information: the locations of 1,200 nodules in the LUNA CT scans. That is the reason I am able to build models on only 1,200 samples (nodules) and have them work very well (normal computer vision datasets have 10,000 - 10,000,000 images).
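The varying-sampling-rate detail is usually handled by resampling every scan to a common voxel spacing before training. Here is a minimal sketch of that preprocessing step; the array, the slice spacings, and the 1 mm target spacing are illustrative assumptions, not details taken from the actual solution.

```python
# Hedged sketch of the resampling step implied above: CT machines use
# different slice thicknesses, so scans are commonly rescaled to a fixed
# isotropic voxel spacing. The "scan" here is a synthetic zero array.
import numpy as np
from scipy.ndimage import zoom

def resample(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Rescale a 3-D CT volume from `spacing` (mm/voxel) to `new_spacing`."""
    factors = np.asarray(spacing, dtype=float) / np.asarray(new_spacing)
    return zoom(volume, factors, order=1)   # linear interpolation

scan = np.zeros((100, 256, 256), dtype=np.float32)   # fake 100-slice scan
resampled = resample(scan, spacing=(2.5, 0.7, 0.7))  # 2.5 mm slice spacing
print(resampled.shape)
```

After this step, a nodule of a given physical size occupies the same number of voxels regardless of which machine produced the scan.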
When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. The first kind, the non-targeted adversarial attack, involves crafting inputs that confuse a machine learning system so it won't work properly. "It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review. "Computer security is definitely moving toward machine learning," Google Brain researcher Ian Goodfellow told the MIT Technology Review.
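A non-targeted adversarial attack can be illustrated in a few lines with the fast gradient sign method, which Goodfellow helped introduce. The "model" below is a toy logistic regression with random weights, a deliberate simplification; the essential rule is the perturbation x_adv = x + eps * sign(grad_x loss), which nudges the input in the direction that raises the loss.

```python
# Hedged sketch of a non-targeted adversarial attack (fast gradient sign
# method) against a toy logistic-regression "network". Model and input are
# synthetic; only the attack rule itself is the technique from the text.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=10), 0.0     # toy model parameters (assumed)

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(class 1)

x = rng.normal(size=10)
y = 1.0                              # take the true label to be class 1

# gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
grad_x = (predict(x) - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)    # FGSM step: move to maximise the loss

print(predict(x), predict(x_adv))    # confidence in the true class drops
```

The attack is "non-targeted" because it only pushes the model away from the correct answer; a targeted variant would instead descend toward a specific wrong label.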