valohai
Learn How to Build a Self-Driving Car System with Python
Wikipedia: "A self-driving car, also known as an autonomous vehicle (AV or auto), driverless car, or robo-car is a vehicle that is capable of sensing its environment and moving safely with little or no human input. Self-driving cars combine a variety of sensors to perceive their surroundings, such as radar, lidar, sonar, GPS, odometry, and inertial measurement units. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage." Self-driving cars have become one of the most interesting areas of Artificial Intelligence. With the rapid growth of companies like Elon Musk's Tesla, which promotes electric vehicles as the future of transportation, there is also a huge opportunity to turn those vehicles into AI-powered machines that make our day-to-day lives easier.
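The quote above describes how an AV combines several noisy sensors into one picture of its surroundings. As a toy illustration only, here is a weighted sensor-fusion sketch in Python; the sensor names, readings, and weights are hypothetical, and a real perception stack would use far more sophisticated filtering (e.g. Kalman filters):

```python
# Toy sketch: fuse noisy per-sensor distance estimates (meters) into a
# single obstacle distance via a weighted average. All values are
# illustrative, not from a real vehicle.

def fuse_ranges(readings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-sensor distance estimates."""
    total_w = sum(weights[s] for s in readings)
    return sum(readings[s] * weights[s] for s in readings) / total_w

readings = {"radar": 42.1, "lidar": 41.8, "sonar": 43.0}
weights = {"radar": 0.3, "lidar": 0.6, "sonar": 0.1}  # trust lidar most
fused = fuse_ranges(readings, weights)  # lies between the raw estimates
```

The fused estimate always stays within the range of the individual readings, which is the basic sanity property any fusion scheme should preserve.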
The Three Roles in a Machine Learning Team
A trend we've been tracking for several years now is how the data science profession has steered away from entirely independent, do-it-all unicorns toward more specialized work. It's not that individuals with deep knowledge across several domains have disappeared; rather, the need for data science has grown, and teams have increased in headcount. It's not just that there are more cooks in the kitchen: machine learning solutions are also much more ambitious in scope. It's becoming more important to think about the competencies of a team rather than expecting every individual to be an expert at everything related to machine learning. This is very similar to software engineering roles diverging into backend, front-end, and DevOps engineers, each focusing on a different part of the system.
MLOps Brings Best Practices to Developing Machine Learning - insideBIGDATA
In this special guest feature, Henrik Skogström, Head of Growth at Valohai, discusses how MLOps (machine learning operations) is becoming increasingly relevant as it is the next step in scaling and accelerating the development of machine learning capabilities. At Valohai, Henrik spearheads the Valohai MLOps platform's adoption and writes extensively about the best practices around machine learning in production. Before Valohai, Henrik worked as a product manager at Quest Analytics to improve healthcare accessibility in the US. Launched in 2017, Valohai is a pioneer in MLOps and has helped companies such as Twitter, LEGO Group, and JFrog get their models to production quicker. If you are actively participating in developing products with machine learning features, the chances are you've heard about MLOps in the past year.
MLOps Is Changing How Machine Learning Models Are Developed - KDnuggets
MLOps refers to machine learning operations. It is a practice that aims to make machine learning in production efficient and seamless. While the term MLOps is relatively nascent, it draws comparisons to DevOps in that it's not a single piece of technology but rather a shared understanding of how to do things the right way. The shared principles MLOps introduces encourage data scientists to think of machine learning not as individual scientific experiments but as a continuous process to develop, launch, and maintain machine learning capabilities for real-world use. Machine learning should be collaborative, reproducible, continuous, and tested. The practical implementation of MLOps involves both adopting certain best practices and setting up an infrastructure that supports these best practices.
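One of the principles listed above, reproducibility, can be made concrete with a small sketch: pin the random seed and record the run's parameters alongside its result so the experiment can be rerun exactly. The function and parameter names here are illustrative, not any particular platform's API:

```python
# Minimal reproducibility sketch: same parameters (including seed)
# must yield the same recorded result. The "metric" is a stand-in
# for a real training score.
import random

def run_experiment(params: dict) -> dict:
    random.seed(params["seed"])      # deterministic randomness
    score = random.random()          # stand-in for a training metric
    return {"params": params, "score": score}

p = {"seed": 42, "lr": 0.01}
first = run_experiment(p)
second = run_experiment(p)           # identical params -> identical record
```

Storing the full parameter record with each run is what lets a teammate (or a CI job) reproduce a result months later, which is the "continuous process" framing MLOps encourages.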
The MLOps Stack
MLOps is a set of best practices that revolve around making machine learning in production more seamless. The purpose is to bridge the gap between experimentation and production with key principles that make machine learning reproducible, collaborative, and continuous. MLOps is not dependent on a single technology or platform. However, technologies play a significant role in practical implementations, similarly to how adopting Scrum often culminates in setting up and onboarding the whole team to a particular tool. To make it easier to consider what tools your organization could use to adopt MLOps, we've made a simple template that breaks down a machine learning workflow into components.
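The idea of breaking a workflow into named components can be sketched in plain Python. This is a conceptual illustration only; the component names (ingest, train, deploy) are hypothetical stand-ins for whatever tools fill each slot in a real stack:

```python
# Conceptual sketch: an ML workflow as an ordered list of named
# components, each transforming a shared state dict and logging its name.
from typing import Callable

Pipeline = list[tuple[str, Callable]]

def run_pipeline(steps: Pipeline, state: dict) -> dict:
    for name, step in steps:
        state = step(state)
        state.setdefault("log", []).append(name)  # record execution order
    return state

steps: Pipeline = [
    ("ingest", lambda s: {**s, "data": [1, 2, 3]}),
    ("train",  lambda s: {**s, "model": sum(s["data"])}),  # toy "model"
    ("deploy", lambda s: {**s, "deployed": True}),
]
result = run_pipeline(steps, {})
```

The value of the decomposition is that each slot can be swapped independently, which is exactly what a stack template helps you reason about.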
An Artificial Intelligence Accelerator is Cherry-Picking AI Startups in Finland
Technology-infused Finns are envisioning vast opportunities in creating and adopting Artificial Intelligence (AI) solutions in the business landscape. This year at Slush in Helsinki, Finland, the largest technology event for startups and venture capital investors in Europe, there seemed to be an Artificial Intelligence company in every direction you looked. It seems there is a good reason for that. An accelerator initiated in 2018 by the Ministry of Economic Affairs of Finland and Technology Industries of Finland is helping startups and SMEs deploy Artificial Intelligence (AI). Finland's Artificial Intelligence Accelerator (FAIA) publishes a curated list of the best AI companies in Finland twice a year.
Scaling Apache Airflow for Machine Learning Workflows
Apache Airflow is a popular platform to create, schedule, and monitor workflows in Python. It has more than 15k stars on GitHub, and it's used by data engineers at companies of all sizes, including Twitter, Airbnb, and Spotify. If you're using Apache Airflow, your architecture has probably evolved based on the number of tasks and their requirements. While working at Skillup, we first had a few hundred DAGs to execute all our data engineering tasks. Then we started doing machine learning.
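Airflow's core abstraction is the DAG: tasks plus dependencies, executed in an order where every task's upstream dependencies run first. The self-contained sketch below shows that scheduling idea using only the standard library (it is not the Airflow API itself, and the task names are hypothetical):

```python
# Sketch of DAG-style scheduling (the idea behind Airflow, not its API):
# compute a valid execution order with a topological sort.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# task -> set of upstream tasks it depends on (hypothetical ML DAG)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train": {"transform"},
    "evaluate": {"train"},
}

order = list(TopologicalSorter(dag).static_order())
# dependencies always precede dependents, so "extract" runs first
```

In real Airflow, the same structure is declared with operators and bitshift syntax (`extract >> transform >> train`), and the scheduler handles retries, parallelism, and monitoring on top of this ordering.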
How to put machine learning models into production
Machine learning is a race. The companies that can put machine learning models into production first, and at scale, will gain a huge advantage over their competitors and billions in potential revenue. But there is a significant challenge around putting machine learning models into production at scale. Organisations can create incredibly complex machine learning models, but it's problematic to take huge datasets, apply them to different iterations of ML models, and then deploy the successful iterations into production. Putting a data science model into production remains the biggest data challenge, and companies are still not getting it right.
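The gap the article describes sits between training and serving. A minimal, hypothetical sketch of the handoff is below: the "model" is a trivial threshold classifier standing in for a real one, serialized as an artifact the way a serving process would load it (a production setup would add versioning, validation, and a proper model registry):

```python
# Toy deployment sketch: serialize a trained "model" artifact, then load
# it back as a serving process would. ThresholdModel is a deliberately
# trivial stand-in for a real trained model.
import pickle

class ThresholdModel:
    def __init__(self, threshold: float):
        self.threshold = threshold

    def predict(self, x: float) -> int:
        return int(x >= self.threshold)

model = ThresholdModel(threshold=0.5)   # "training" output
blob = pickle.dumps(model)              # persisted artifact
served = pickle.loads(blob)             # what the serving side loads
```

Most production pain comes from everything around this handoff: keeping the artifact, its training data, and its preprocessing code in sync across iterations.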