Elon Musk's plans for mind-controlled gadgets: what we know so far

New Scientist

Elon Musk's brain-computer interface company Neuralink has finally broken its silence. Since the company was formed in 2016, it has kept its plans secret, but in a presentation on Tuesday night it showed off its vision and explained what the firm has done so far. At the event, the company unveiled a brain-computer interface – a technology that allows machines to read brain activity. Neuralink says its device will have around 3000 surgically implanted electrodes, each of which will be able to monitor around 1000 neurons at a time. The electrodes will be attached to around 100 extremely thin threads, between 4 and 6 micrometres wide – much thinner than a human hair.

Elon Musk's Neuralink unveils effort to build implant that can read your mind

The Guardian

Elon Musk's secretive "brain-machine interface" startup, Neuralink, stepped out of the shadows on Tuesday evening, revealing its progress in creating a wireless implantable device that can – theoretically – read your mind. At an event at the California Academy of Sciences in San Francisco, Musk touted the startup's achievements since he founded it in 2017 with the goal of staving off what he considers to be an "existential threat": artificial intelligence (AI) surpassing human intelligence. Two years later, Neuralink claims to have achieved major advances toward Musk's goal of having human and machine intelligence work in "symbiosis". Neuralink says it has designed very small "threads" – smaller than a human hair – that can be injected into the brain to detect the activity of neurons. It also says it has developed a robot to insert those threads in the brain, under the direction of a neurosurgeon.

Guide to Machine Learning with ML.NET 1.0


As a person coming from the .NET world, it was quite hard to get into machine learning right away. One of the main reasons was that I couldn't simply start Visual Studio and try out these new things in the technologies I am proficient with. I had to overcome another obstacle and learn other programming languages better suited for the job, like Python and R. You can imagine my happiness when, more than a year ago, Microsoft announced that a new feature would be available as part of .NET Core 3 – ML.NET. In fact, it made me so happy that this is the third time I am writing a similar guide. Basically, I wrote one when ML.NET was at version 0.2 and one when it was at version 0.10. Both times, the folks at Microsoft decided to modify the API and make my articles obsolete. That is why I have to do it once again.

A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors


This paper introduces a la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations that is based upon recent theoretical results for GloVe-like embeddings. Our method relies mainly on a linear transformation that is efficiently learnable using pretrained word vectors and linear regression. This transform is applicable on the fly in the future when a new text feature or rare word is encountered, even if only a single usage example is available. We introduce a new dataset showing how the a la carte method requires fewer examples of words in context to learn high-quality embeddings, and we obtain state-of-the-art results on a nonce task and some unsupervised document classification tasks.
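The recipe described in the abstract can be sketched in a few lines: average the pretrained vectors of each word's contexts, learn a linear transform A by least squares so that it maps context averages onto the words' own vectors, then embed a new or rare word by transforming its (possibly single) context average. The toy vocabulary, contexts, and dimensionality below are invented purely for illustration; a real setup would learn A from a large corpus of frequent words.

```python
import numpy as np

# Hypothetical pretrained vectors: a 5-word vocabulary in 4 dimensions.
rng = np.random.default_rng(0)
dim = 4
vocab = ["cat", "dog", "sat", "on", "mat"]
vectors = {w: rng.normal(size=dim) for w in vocab}

def context_avg(context, vectors):
    """Average the pretrained vectors of a word's context."""
    return np.mean([vectors[c] for c in context], axis=0)

# Training pairs: for each known word, (its average context vector, its own vector).
contexts = {
    "cat": ["sat", "on", "mat"],
    "dog": ["sat", "on", "mat"],
    "sat": ["cat", "on", "mat"],
    "on":  ["cat", "sat", "mat"],
    "mat": ["cat", "sat", "on"],
}
U = np.stack([context_avg(ctx, vectors) for ctx in contexts.values()])
V = np.stack([vectors[w] for w in contexts])

# Learn the linear transform A by linear regression: U @ A ≈ V.
A, *_ = np.linalg.lstsq(U, V, rcond=None)

def a_la_carte(context, vectors, A):
    """Induce a vector for an unseen word from a single usage context."""
    return context_avg(context, vectors) @ A

new_vec = a_la_carte(["sat", "on"], vectors, A)
```

The point of the linear-regression step is that A is learned once from words with plentiful contexts and then applied on the fly to any rare word or n-gram, which is what makes the method cheap.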

Big Ideas in AI for the Next 10 Years


Summary: Despite concerns about China taking the lead in AI, our own government efforts, mostly through DARPA, continue to provide strong leadership and funding to maintain our lead. Here's their plan to maintain that lead over the next decade. Think all those great ideas that have powered AI/ML for the last 10 years came from Silicon Valley and a few universities? Hard as it may be to admit, it's the billions in seed money that our government has spent that got pretty much all of these breakthroughs to the doorway of commercial acceptability. Dozens of articles bemoan the huge investments that China is making in AI, with the threat that they will pull ahead.

XGBoost and Random Forest with Bayesian Optimisation


Instead of only comparing XGBoost and Random Forest, in this post we will try to explain how to use those two very popular approaches with Bayesian Optimisation, and what those models' main pros and cons are. XGBoost (XGB) and Random Forest (RF) are both ensemble learning methods and predict (classification or regression) by combining the outputs of individual decision trees (we assume tree-based XGB or RF). XGBoost builds decision trees one at a time; each new tree corrects errors made by the previously trained trees. At Addepto we use XGBoost models to solve anomaly detection problems, e.g. in a supervised learning approach.
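The Bayesian Optimisation loop itself is easy to sketch: fit a probabilistic surrogate (a small Gaussian-process regressor written with NumPy below) to the hyperparameter evaluations seen so far, pick the next candidate with an acquisition function (upper confidence bound here), evaluate it, and repeat. The quadratic `objective` is a made-up stand-in for cross-validated accuracy as a function of tree depth; in practice you would plug in an XGBoost or Random Forest cross-validation score instead.

```python
import numpy as np

def rbf_kernel(a, b, ls=2.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and variance at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    alpha = np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    mu = Ks.T @ alpha
    var = 1.0 - np.sum(Ks * v, axis=0)      # k(x, x) = 1 for the RBF kernel
    return mu, np.maximum(var, 1e-12)

def objective(depth):
    # Made-up stand-in for a cross-validated score: peaks at depth 6,
    # then degrades as the model starts to overfit.
    return 0.9 - (depth - 6.0) ** 2 / 20.0

rng = np.random.default_rng(0)
X = rng.uniform(1, 12, size=3)              # three random initial depths
y = np.array([objective(x) for x in X])
grid = np.linspace(1, 12, 200)              # candidate depths to score

for _ in range(15):
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)           # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_depth = X[np.argmax(y)]
```

The acquisition function is what distinguishes this from grid or random search: it spends evaluations where the surrogate is either promising or uncertain, which matters when each evaluation is an expensive cross-validation run.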

Top Machine Learning and Data Science Methods Used at Work


The practice of data science requires the use of algorithms and data science methods to help data professionals extract insights and value from data. A recent survey by Kaggle revealed that data professionals used data visualization, logistic regression, cross-validation and decision trees more than other data science methods in 2017. Looking ahead to 2018, data professionals are most interested in learning deep learning (41%). Kaggle conducted a survey in August 2017 of over 16,000 data professionals (2017 State of Data Science and Machine Learning). Their survey included a variety of questions about data science, machine learning, education and more.

Artificial Intelligence Market Growing at a CAGR of 36.6% and Expected to Reach $190.61 Billion by 2025 - Exclusive Report by MarketsandMarkets


According to the new market research report "Artificial Intelligence Market by Offering (Hardware, Software, Services), Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision), End-User Industry, and Geography - Global Forecast to 2025", published by MarketsandMarkets, the Artificial Intelligence Market is expected to be valued at USD 21.5 billion in 2018 and is likely to reach USD 190.6 billion by 2025, at a CAGR of 36.6% during the forecast period. Major drivers for the market are growing big data, the increasing adoption of cloud-based applications and services, and an increase in demand for intelligent virtual assistants. The major restraint for the market is the limited number of AI technology experts. Critical challenges facing the AI market include concerns regarding data privacy and the unreliability of AI algorithms. Underlying opportunities in the artificial intelligence market include improving operational efficiency in the manufacturing industry and the adoption of AI to improve customer service.
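As a quick sanity check (my own arithmetic, not part of the report), the two endpoints and the stated CAGR are mutually consistent: compounding USD 21.5 billion at 36.6% per year over the seven years from 2018 to 2025 lands within rounding distance of USD 190.6 billion.

```python
# Sanity check: end value = start value * (1 + CAGR) ** years
start_usd_bn = 21.5          # reported 2018 market size, USD billions
cagr = 0.366                 # reported compound annual growth rate
years = 2025 - 2018          # forecast period

projected = start_usd_bn * (1 + cagr) ** years
print(round(projected, 1))   # close to the reported 190.6 (the CAGR is rounded)
```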

Organisations turn to AI in race against cyber attackers


Companies and public sector organisations say they have no choice but to automate their cyber defences as hacking becomes increasingly sophisticated. Security professionals can no longer keep pace with the volume and sophistication of attacks on computer systems. In a study of 850 security professionals across 10 countries, more than half said their organisations are overwhelmed with data. So they are turning to machine-learning technologies that can identify cyber attacks by analysing huge quantities of network data and have the potential to block attacks automatically. By 2020, two out of three companies plan to deploy cyber security defences incorporating machine learning and other forms of artificial intelligence (AI), according to the Capgemini study, Reinventing cyber security with artificial intelligence.
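The underlying idea — learn what normal network traffic looks like, then flag statistically unusual activity — can be illustrated with a deliberately simple sketch (a toy of my own, not the method from the Capgemini study): model request volume as roughly Gaussian and flag samples far outside the learned baseline. Real deployments use far richer features and models, but the learn-a-baseline-then-flag-outliers loop is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(loc=500, scale=20, size=1000)   # normal requests/sec
traffic = np.append(baseline, [900.0, 950.0])         # injected attack bursts

# Learn the baseline statistics, then flag anything beyond 4 standard deviations.
mu, sigma = baseline.mean(), baseline.std()
anomaly_idx = np.where(np.abs(traffic - mu) > 4 * sigma)[0]
```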