At Kinaxis, who we are is grounded in our common belief that people matter. Each one of us plays an important part in accomplishing our work, building our culture and making a global impact. Every day, we're empowered to work together to help our customers make fast, confident planning decisions. This is how we create a better planet – for each other, for our customers and for generations to come. Our cloud-based platform RapidResponse ensures that the products we need – everything from medicine and cars, to day-to-day items like toothpaste – make it to market and into our hands when we need them with minimal ecological footprint.
In this tutorial, you'll learn how to run a Python script. When working on data science projects, you'll write Python code all the time… You know that already. But when you start to automate these tasks (whether it's data cleaning, data loading, analytics, machine learning or anything else), you'll rely heavily on scripting. See, in most of my Python for data science tutorials we wrote code in Jupyter Notebooks. But in real data projects, you don't want to run everything manually.
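To make this concrete, here's a minimal example script (the file name and the greeting logic are just placeholders for whatever task you're automating) showing the standard structure of a runnable Python script:

```python
import sys

def greet(name: str) -> str:
    """Build the message this toy script prints."""
    return f"Hello, {name}!"

def main(argv: list) -> None:
    # Take the first command-line argument, or fall back to a default.
    name = argv[1] if len(argv) > 1 else "world"
    print(greet(name))

if __name__ == "__main__":
    main(sys.argv)
```

Save it as, say, `greet.py` and run it from a terminal with `python greet.py Alice`. The `if __name__ == "__main__":` guard means the same file can also be imported from a notebook or another script without side effects, which is exactly what you want when moving from Jupyter to automation.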
This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. It engages with the normative and contested notion of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics, i.e., tech ethics in action.
Although Raman spectroscopy is widely used for the investigation of biomedical samples and has a high potential for use in clinical applications, it is not common in clinical routines. One of the factors that obstruct the integration of Raman spectroscopic tools into clinical routines is the complexity of the data processing workflow. Software tools that simplify spectroscopic data handling may facilitate such integration by familiarizing clinical experts with the advantages of Raman spectroscopy. Here, RAMANMETRIX is introduced as a user-friendly software with an intuitive web-based graphical user interface (GUI) that incorporates a complete workflow for chemometric analysis of Raman spectra, from raw data pretreatment to a robust validation of machine learning models. The software can be used both for model training and for the application of pretrained models to new data sets. Users have full control of the parameters during model training, but the testing data flow is frozen and does not require additional user input. RAMANMETRIX is available in two versions: as standalone software and as a web application. Owing to its modern software architecture, the computational backend can be executed separately from the GUI and accessed through an application programming interface (API) for applying a preconstructed model to the measured data. This opens up possibilities for using the software as a data processing backend for measurement devices in real time. Models preconstructed by more experienced users can be exported and reused for easy one-click data preprocessing and prediction, which requires minimal interaction between the user and the software. The results of such predictions and the graphical outputs of the different data processing steps can be exported and saved.
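As an illustration of how such an API-driven backend might be called from a measurement device, a client would typically serialize spectra to JSON and POST them to a prediction route. The endpoint path, model identifier, and payload field names below are hypothetical, not RAMANMETRIX's documented interface:

```python
import json

def build_prediction_request(model_id: str, spectra: list) -> bytes:
    """Serialize a request body for a hypothetical /predict endpoint.
    All field names here are illustrative only."""
    payload = {"model": model_id, "spectra": spectra}
    return json.dumps(payload).encode("utf-8")

body = build_prediction_request("raman-demo-model", [[0.1, 0.4, 0.2]])
# The bytes could then be sent with urllib.request or any HTTP client, e.g.:
# urllib.request.Request("https://<server>/api/predict", data=body,
#                        headers={"Content-Type": "application/json"})
```

The point of such a split is that the GUI becomes just one client among many: an acquisition instrument can push each measured spectrum to the same backend and receive a prediction without any user interaction.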
Disruptive technology is technology that upends the normal operation of a market or an industry. Digital disruption entails established companies and start-ups alike enlisting new technologies in the fight to dislodge incumbents, protect entrenched positions, or re-invent entire industries and business activities. Remaining disruptive in a market requires continuous innovation. Incremental innovations occur in every industry, but to be truly disruptive, an innovation must entirely transform a product or solution that historically was so complicated only a few could access it. At a minimum, digital transformation enables an organization to address the needs of its customers more simply and directly. But through disruptive innovation, companies can offer users a far better way of doing things, one with which current incumbents simply cannot compete. Artificial intelligence (AI), e-commerce, cloud, social networking, the Internet of Things, 5G, blockchain and other emerging technologies are being leveraged to blur the lines between industries, creating new business models and converging sectors. A company that disrupts its market is in a great position to take advantage of new opportunities. Sometimes offering something different can change the whole market for the better. Most of the top disruptive companies earn this label by offering highly innovative products and services, and 100 such companies are listed below. 403Tech provides innovative, managed cloud services to help its customers succeed. With best-in-class service and technology, 403Tech protects companies against cybercrime while enabling greater efficiency and productivity. Some of its popular services include desktop support, server support, wired and wireless networking, virus removal, data recovery and backup, and hosted cloud services. Aegeus Technologies aims to design and develop robotic technologies and solutions.
Innovative developments in data processing, archiving, analysis, and visualization are now essential to deal with the data deluge expected in next-generation facilities for radio astronomy, such as the Square Kilometre Array (SKA) and its precursors. In this context, the integration of source extraction and analysis algorithms into data visualization tools could significantly improve and speed up the cataloguing process of large-area surveys, boosting astronomer productivity and shortening publication time. To this end, we are developing a visual analytic platform (CIRASA) for advanced source finding and classification, integrating state-of-the-art tools such as the CAESAR source finder, the ViaLactea Visual Analytic (VLVA) and the Knowledge Base (VLKB). In this work, we present the project objectives and the platform architecture, focusing on the implemented source finding services.
The Software Report is pleased to announce The Top 100 Software Companies of 2021. This year's awardee list comprises a wide range of companies, from the most well-known, such as Microsoft, Adobe, and Salesforce, to the newer but rapidly growing Qualtrics, Atlassian, and Asana. A good number of awardees may be new names to some, but that should be no surprise given that software has always been an industry of startups that seemingly came out of nowhere to create and dominate a new space. Software has become the backbone of our economy. From large enterprises to small businesses, almost all rely on software, whether for accounting, marketing, sales, supply chain, or a myriad of other functions. Software has become the dominant industry of our time, and as such, we place significance on highlighting the best companies leading the industry forward. The following awardees were nominated and selected based on a thorough evaluation process. Among the key criteria considered were ...
In this paper, we present a new Python library called mPyPl, which is intended to simplify complex data processing tasks using a functional approach. This library defines operations on lazy data streams of named dictionaries represented as generators (so-called multi-field datastreams), and allows enriching those data streams with more 'fields' in the process of data preparation and feature extraction. Thus, most data preparation tasks can be expressed in the form of neat linear 'pipelines', similar in syntax to UNIX pipes, or the |> functional composition operator in F#. We define basic operations on multi-field data streams, which resemble classical monadic operations, and show the similarity of the proposed approach to monads in functional programming. We also show how the library was used in complex deep learning tasks of event detection in video, and discuss different evaluation strategies that allow for different trade-offs between memory and performance.
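To illustrate the pattern (this is a generic sketch of the idea, not mPyPl's actual API; the helper names `Pipe`, `apply`, and `where` are invented here), a multi-field datastream can be modeled as a generator of dictionaries, with pipeline stages chained through an overloaded `|` operator:

```python
class Pipe:
    """Wrap a function so that `stream | Pipe(f)` applies f to the stream."""
    def __init__(self, fn):
        self.fn = fn
    def __ror__(self, stream):
        # Called for `stream | pipe`, since generators do not define `__or__`.
        return self.fn(stream)

def apply(src_field, new_field, fn):
    """Lazily enrich each dict in the stream with a new computed field."""
    def _run(stream):
        for item in stream:
            item = dict(item)            # avoid mutating upstream data
            item[new_field] = fn(item[src_field])
            yield item
    return Pipe(_run)

def where(pred):
    """Lazily filter the stream by a predicate on each dict."""
    def _run(stream):
        return (item for item in stream if pred(item))
    return Pipe(_run)

# A lazy multi-field datastream: a generator of named dictionaries.
data = ({"filename": f"clip{i}.mp4", "frames": i * 10} for i in range(5))

result = (
    data
    | apply("frames", "duration_s", lambda n: n / 25.0)  # feature extraction
    | where(lambda d: d["duration_s"] > 0.5)             # filtering
)
rows = list(result)  # nothing is computed until the stream is consumed
```

Because every stage yields items one at a time, the whole pipeline stays lazy: only the final `list(...)` (or an equivalent sink) pulls data through, which is what makes this style workable for large video datasets that do not fit in memory.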
Confidential multi-stakeholder machine learning (ML) allows multiple parties to perform collaborative data analytics while not revealing their intellectual property, such as ML source code, models, or datasets. State-of-the-art solutions based on homomorphic encryption incur a large performance overhead. Hardware-based solutions, such as trusted execution environments (TEEs), significantly improve the performance of inference computations but still suffer from low performance in training computations, e.g., deep neural network model training, because of the limited availability of protected memory and the lack of GPU support. To address this problem, we designed and implemented Perun, a framework for confidential multi-stakeholder machine learning that allows users to make a trade-off between security and performance. Perun executes ML training on hardware accelerators (e.g., GPUs) while providing security guarantees using trusted computing technologies, such as the trusted platform module and integrity measurement architecture. Less compute-intensive workloads, such as inference, execute only inside a TEE, and thus with a smaller trusted computing base. The evaluation shows that during ML training on CIFAR-10 and real-world medical datasets, Perun achieved a 161x to 1560x speedup compared to a pure TEE-based approach.
Our editors have compiled this directory of the best Python books based on Amazon user reviews, ratings, and ability to add business value. There are loads of free resources available online (such as Solutions Review's Data Analytics Software Buyer's Guide, visual comparison matrix, and best practices section) and those are great, but sometimes it's best to do things the old-fashioned way. There are few resources that can match the in-depth, comprehensive detail of one of the best Python books. Each of the books listed in the first section of this compilation has met a minimum criterion of 15 reviews and a 4-star-or-better ranking. Below you will find a library of titles from recognized industry analysts, experienced practitioners, and subject matter experts spanning the depths of Python coding for beginners all the way to advanced data science best practices for Python users. This compilation includes publications for practitioners of all skill levels. "Python Crash Course is the world's best-selling guide to the Python programming language. In the first half of the book, you'll learn basic programming concepts, such as variables, lists, classes, and loops, and practice writing clean code with exercises for each topic. You'll also learn how to make your programs interactive and test your code safely before adding it to a project. In the second half, you'll put your new knowledge into practice with three substantial projects: a Space Invaders-inspired arcade game, a set of data visualizations with Python's handy libraries, and a simple web app you can deploy online."