Left for Dead, R Surges Again

#artificialintelligence

Don't look now, but R, which some had written off as a language in terminal decline in the face of Python's immense and growing popularity, appears to be staging a furious comeback the likes of which IT has rarely seen. According to the TIOBE Index, which tracks the popularity of programming languages (as expressed in Web searches), R has risen an unprecedented 12 spots, up from number 20 in the summer of 2019 to number 8 on its list today. That's a huge move, particularly in light of the continued domination of Python as the language of choice for data science. A recent report on data science tools by Anaconda found that 75% of data scientists and analysts report using Python "always" or "frequently," by far the highest share for any language. Only 6% of users reported not using Python at all, which is quite remarkable when you think about it. To its credit, R was number two in Anaconda's ranking, but it wasn't really close: only 27% of users reported using R "always" or "frequently."


AI Computing Platform

#artificialintelligence

The cognitiveAI Platform is a cloud-based AI software platform for deriving new levels of enterprise value from multiple data sources. The platform is designed to be cost-effective and easy to deploy, simplifying complexity while unlocking the power of Semantic Computing.


Redefining support with customer support chatbot

#artificialintelligence

Artificial intelligence is profoundly reshaping the customer support scene. From automated messages to visual search, AI enables organizations to interact with their clients through various touchpoints and improve their experience. Customer support chatbots and other automation tools help businesses boost customer experience and achieve faster growth. That's why at Engati we have created two customer support chatbot templates: the Telecom Bot and the TechDesk Bot. Let's read more about chatbots and their role in customer support.


Are you messing with me, softmax?

#artificialintelligence

Once upon a time I was trying to train a speaker recognition model on the TIMIT dataset. I used AlexNet, since I wanted to try this with a smaller model first, with a softmax layer at the end. The inputs were spectrograms of different people's voices and the labels were the speaker IDs. MSELoss from the PyTorch library was used as the loss function.
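A minimal sketch of that setup in PyTorch. The network, layer sizes, and speaker count below are illustrative assumptions (the article's actual AlexNet code is not shown); what it reproduces is the combination described: an explicit softmax output layer paired with MSELoss, which requires one-hot targets rather than integer speaker IDs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SPEAKERS = 10  # assumption for illustration; the full TIMIT corpus has 630 speakers

# Stand-in CNN classifier (much smaller than AlexNet) ending in an explicit
# softmax layer, trained with MSELoss on one-hot encoded speaker IDs.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, NUM_SPEAKERS),
    nn.Softmax(dim=1),  # softmax layer at the end, as described
)

criterion = nn.MSELoss()

spectrograms = torch.randn(4, 1, 64, 64)                 # dummy batch of spectrograms
speaker_ids = torch.tensor([0, 3, 7, 2])                 # labels are speaker IDs
targets = F.one_hot(speaker_ids, NUM_SPEAKERS).float()   # MSELoss needs one-hot targets

probs = model(spectrograms)       # each row sums to 1
loss = criterion(probs, targets)
loss.backward()
```

Note that for classification the more common pairing is raw logits with `nn.CrossEntropyLoss`, which takes the integer labels directly; softmax plus MSELoss trains, but its gradients saturate easily.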


Global Big Data Conference

#artificialintelligence

Today, every organization in almost every industry is keen to leverage the power of artificial intelligence (AI) to better understand its business, clients, products, and processes. The applications of AI continue to grow. Data is everywhere, but to make it speak we need the right project goals, mindset, and resources. Being a black belt in martial arts helped me reinforce six core life skills in my personal and professional life: belief, communication, respect, honesty, self-esteem, and discipline. As I continued to train, I realized that AI project principles are no different.


Using Machine Learning to Predict Fitbit Sleep Scores

#artificialintelligence

Before we do any further analysis with our data, we need to split the entire data set into three subsets: a training set, a validation set, and a test set. The test set is also referred to as the hold-out set; once we split it from the remaining data, we do not touch it again until we have trained and tweaked our machine learning models to a point where we think they will perform well on data they have never seen before. We split the remaining data into a training set and a validation set. This allows us to train our models on the training data and then evaluate their performance on the validation data. We can then tweak our models, evaluate them on the validation data again, and thereby find ways to improve model performance.
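The two-stage split described above can be sketched with scikit-learn's `train_test_split`. The toy arrays and the 20% / 25% proportions are assumptions for illustration, not the article's actual Fitbit data or split sizes:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for the Fitbit features and sleep scores.
X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# Carve off the hold-out test set first (20% here) and don't touch it again.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Split the remainder into training and validation sets (here 75% / 25%).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)
```

With these proportions the final split is 60% training, 20% validation, and 20% test; fixing `random_state` makes the split reproducible across runs.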


Create Apache Spark machine learning pipeline - Azure HDInsight

#artificialintelligence

To demonstrate a practical use of an ML pipeline, this example uses the sample HVAC.csv data file that comes pre-loaded on the default storage for your HDInsight cluster, either Azure Storage or Data Lake Storage. HVAC.csv contains a set of times with both target and actual temperatures for HVAC (heating, ventilation, and air conditioning) systems in various buildings. The goal is to train the model on the data, and produce a forecast temperature for a given building.


Building Image Classifiers made easy with Azure Custom Vision

#artificialintelligence

In our previous blog, we outlined that supervised machine learning (ML) models need labeled data, but the majority of data collected in raw form lacks labels. So, the first step before building an ML model is to get the raw data labeled by domain experts. To that end, we had outlined how Doccano is an easy tool for collaborative text annotation. However, not all the data that gets collected is in text format; often we end up with a bunch of images, but the end goal is again to build a supervised ML model. As stated previously, the first step is to tag these images with specific labels.


Full-stack Developer

#artificialintelligence

A Full-stack Developer with a front-end preference is sought by a data analytics company in Essex. Following recent growth and funding, the company is looking to grow its development team to work on its bespoke client portal and mobile app for international clients. You will also have the opportunity to get involved with their AI projects involving Computer Vision and 3D modelling. The following would be nice to have; however, if there is something you do not have experience with, you would have the opportunity to be trained accordingly. The technologies, varied projects, and predicted growth make this role a challenging and exciting prospect for any successful candidate, especially if you are interested in the AI space and appreciate the opportunity to develop and learn new skills.


Exploring GPT-3: A New Breakthrough in Language Generation - KDnuggets

#artificialintelligence

It seems like only last year that we were arguing about whether the slow-release rollout of the 1.5-billion-parameter Generative Pretrained Transformer 2 (GPT-2) was reasonable. If the debate seems recent, that's because it is (writing from 2020): the notorious GPT-2 model was announced by OpenAI in February 2019, but it wasn't fully released until nearly nine months later (although it was replicated before that). The release schedule was admittedly somewhat experimental, meant more to foster discussion of responsible open publishing than as a last-ditch effort to avert an AI apocalypse. All that is a bit moot by now, because not only has OpenAI trained a much larger language model in GPT-3, but you can sign up to access it through their new API. Comparing GPT-3 to GPT-2 is like comparing apples to, well, raisins, because the model is about that much larger.