

7 Best Libraries for Machine Learning Explained - KDnuggets

#artificialintelligence

However, the first machine learning libraries as we know them today, which provide tools and frameworks for implementing and training machine learning models, did not appear until the 1980s and 1990s. One of the earliest was the StatLib library, developed at Carnegie Mellon University in the 1980s, which provided tools for statistical analysis and machine learning, including support for decision trees and neural networks. Other early machine learning libraries include Weka, developed at the University of Waikato in New Zealand in the 1990s, and LIBSVM, developed at National Taiwan University in the late 1990s. These libraries provided tools for a variety of machine learning tasks, including classification, regression, and clustering.
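That lineage is still visible in modern tools: scikit-learn's `SVC` classifier is built on LIBSVM under the hood. A minimal sketch of the kind of classification task these early libraries supported (the toy data here is purely illustrative):

```python
# Train a support vector classifier on toy 1-D data.
# scikit-learn's SVC wraps LIBSVM, one of the early libraries above.
from sklearn.svm import SVC

X = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]  # two well-separated clusters
y = [0, 0, 0, 1, 1, 1]                             # class label per point

clf = SVC(kernel="rbf").fit(X, y)

# Classify new points near each cluster.
print(clf.predict([[1.5], [10.5]]))
```

The same `fit`/`predict` pattern covers the regression and clustering tasks mentioned above via sibling estimators.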


AI software: The bridge from data to insights

#artificialintelligence

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Artificial Intelligence (AI) has the potential to transform every business and improve the life of every person on the planet. In fact, every day we hear about AI breaking new ground, from detecting cancer and playing Minecraft to creating "sentient" chatbots and generating compelling art. The goal of AI is simple: to accelerate "data to insights."


Baseten nabs $20M to make it easier to build machine learning-based applications – TechCrunch

#artificialintelligence

As the tech world inches closer to the idea of artificial general intelligence, we're seeing another interesting theme emerging in the ongoing democratization of AI: a wave of startups building tech to make AI technologies accessible to a wider range of users and organizations. Today, one of these, Baseten -- which is building tech to make it easier to incorporate machine learning into a business' operations, production and processes without a need for specialized engineering knowledge -- is announcing $20 million in funding and the official launch of its tools. These include a client API and a library of pre-trained models to deploy models built in TensorFlow, PyTorch or scikit-learn; the ability to build APIs to power your own applications; and the ability to create custom UIs for your applications based on drag-and-drop components. The company has been operating in a closed, private beta for about a year and has amassed an interesting group of customers so far, including Stanford, the University of Sydney, Cockroach Labs and Patreon, which use it for tasks such as automated abuse detection (through content moderation) and fraud prevention. The $20 million, being discussed publicly for the first time to coincide with the commercial launch, comes in two tranches, with equally notable names among the backers.


Using geospatial data to unlock innovation in the property sector

#artificialintelligence

As technology and data capabilities advance, there is an increasing focus on leveraging data that will enable innovation and deliver new value for customers. Geospatial data is an emerging area of opportunity in the property sector, and it is quickly being adopted by agile "proptechs", developers and data scientists. Buildings, like everything, occupy space. For residential and commercial enterprises, the contextual data attached to properties is fertile ground for innovation. Whether helping property developers better understand the spatial contexts of sites or providing homebuyers with easier access to the detail they need, geospatial data is being used more than ever to create solutions that deliver value across the sector.
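Much of this property-level innovation rests on simple spatial primitives. A minimal, standard-library-only sketch of the haversine great-circle distance, say from a property to a nearby amenity (the coordinates below are made up for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator is roughly 111.2 km.
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))  # 111.2
```

Real geospatial stacks layer projections, polygons and joins on top, but distance queries like this one underpin many "nearest school / station / flood zone" features.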


Bodo.ai secures $14M, aims to make Python better at handling large-scale data – TechCrunch

#artificialintelligence

Bodo.ai, a parallel compute platform for data workloads, is developing a compiler to make Python portable and efficient across multiple hardware platforms. It announced Wednesday a $14 million Series A funding round led by Dell Technologies Capital. Python is one of the top programming languages among artificial intelligence and machine learning developers and data scientists, but as Behzad Nasre, co-founder and CEO of Bodo.ai, points out, it is challenging to use when handling large-scale data. Bodo.ai, headquartered in San Francisco, was founded in 2019 by Nasre and Ehsan Totoni, CTO, to make Python higher performing and production ready. Nasre, who had a long career at Intel before starting Bodo, met Totoni and learned about the project Totoni was working on to democratize machine learning and enable parallel computing for everyone.


AWS Announces Nine New Amazon SageMaker Capabilities

#artificialintelligence

Distributed training on Amazon SageMaker delivers new capabilities that can train large models up to two times faster than would otherwise be possible with today's machine learning processors. AWS announced nine new capabilities for its industry-leading machine learning service, Amazon SageMaker, making it even easier for developers to automate and scale all steps of the end-to-end machine learning workflow. Today's announcements bring together powerful new capabilities like faster data preparation, a purpose-built repository for prepared data, workflow automation, greater transparency into training data to mitigate bias and explain predictions, distributed training capabilities to train large models up to two times faster, and model monitoring on edge devices. Machine learning is becoming more mainstream, but it is still evolving at a rapid clip. With all the attention machine learning has received, it seems like it should be simple to create machine learning models, but it isn't. In order to create a model, developers need to start with the highly manual process of preparing the data.


Serverless comes to machine learning with container image support in AWS Lambda.

#artificialintelligence

AWS Lambda was released back in 2014 and became a game-changing technology. By adopting Lambda, many developers found a new, far simpler way to build microservices. It comes with many additional advantages such as event-based programming, cloud-native deployment, and the development of the now well-known infrastructure-as-code paradigm. A paradigm-shifting technology like AWS Lambda had to define its own standards to support all modern app-development lifecycle requirements. To keep development easy, Lambda offered the simplest form of code project management: the zip file format.
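Whether the code ships as a zip archive or, with the newer container image support, baked into an image built from an AWS base image, the programming model stays the same: a handler function with Lambda's standard `(event, context)` signature. A minimal sketch (the function name and event shape are illustrative, not prescribed):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: echo back a greeting.

    Only the packaging changes between zip and container image
    deployments; the handler signature does not.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing (in Lambda, context is a real runtime object).
print(handler({"name": "Lambda"}, None))
```

For a container deployment, this file would typically be copied into an image based on an AWS-provided Python base image, with the image's command pointing at the handler.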


Learn Amazon SageMaker: A guide to building, training, and deploying machine learning models for developers and data scientists: Simon, Julien, Pochetti, Francesco: 9781800208919: Amazon.com: Books

#artificialintelligence

Julien Simon is a principal AI and machine learning developer advocate. He focuses on helping developers and enterprises to bring their ideas to life. He frequently speaks at conferences and blogs on AWS blogs and on Medium. Prior to joining AWS, Julien served for 10 years as CTO/VP of engineering in top-tier web start-ups where he led large software and ops teams in charge of thousands of servers worldwide. In the process, he fought his way through a wide range of technical, business, and procurement issues, which helped him gain a deep understanding of physical infrastructure, its limitations, and how cloud computing can help.


Expert System Releases expert.ai Natural Language API

#artificialintelligence

The global Artificial Intelligence company Expert System announced the release of the expert.ai NL API, the cloud-based Natural Language API that enables data scientists, computational linguists, knowledge engineers and developers to easily embed advanced Natural Language Understanding and Natural Language Processing capabilities (NLU / NLP) into their applications. This release is the first step in executing on the company's strategy to become the global platform of reference for AI-based Natural Language problem solving. The growing need for accessible and accurate AI-based NLU / NLP applications in the enterprise places increased demand on the developer ecosystem to bring speed, scale and precision to linguistic analysis. According to Gartner, "during recent years, advances in the application of machine learning (including neural networks) and knowledge graphs to natural language processing have enabled machine-based attribution that diminishes the need for human oversight. Application of the technology is broadening as well as deepening -- across industries and functional domains, and into use cases -- pushing this innovation from many years in the Trough of Disillusionment toward the Slope of Enlightenment."


Build AI you can trust with responsible ML

#artificialintelligence

As AI reaches critical momentum across industries and applications, it becomes essential to ensure the safe and responsible use of AI. AI deployments are increasingly impacted by the lack of customer trust in the transparency, accountability, and fairness of these solutions. Microsoft is committed to the advancement of AI and machine learning (ML), driven by principles that put people first, and tools to enable this in practice. In collaboration with the Aether Committee and its working groups, we are bringing the latest research in responsible AI to Azure. Let's look at how the new responsible ML capabilities in Azure Machine Learning and our open-source toolkits empower data scientists and developers to understand ML models, protect people and their data, and control the end-to-end ML process.
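One building block behind fairness tooling of this kind is simply disaggregating a model metric by a sensitive attribute. A minimal, library-free sketch of the idea (toy data; real toolkits such as Fairlearn generalize this across metrics and attributes):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Prediction accuracy broken down by a sensitive attribute."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / counts[g] for g in counts}

# Toy labels and predictions with a hypothetical sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(accuracy_by_group(y_true, y_pred, groups))
```

Large gaps between the per-group numbers are exactly the kind of signal the responsible-ML capabilities described above are meant to surface before deployment.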