Data Engineer - Business Intelligence at Block - San Francisco, CA, United States
As a Business Intelligence Data Engineer, you will be responsible for defining, developing, and managing curated datasets, key business metrics, and reporting across functional units at Square. You will architect, implement, and manage data models and ETL pipelines that enable product and business teams to access consistent metrics across the Square ecosystem. You are a self-starter, comfortable working cross-functionally with other teams across Square. Block takes a market-based approach to pay, and pay may vary depending on your location. U.S. locations are categorized into one of four zones based on a cost-of-labor index for that geographic area.
- Information Technology > Artificial Intelligence (0.71)
- Information Technology > Data Science > Data Mining (0.62)
Miko Robotics acquires majority stake in AI chess startup, Square Off
Square Off charmed us at CES 2019, when the startup showed off its robotic chess board at our Hardware Battlefield event. Watching the pieces move on their own, courtesy of underlying AI, grabbed the attention of a jaded crowd of showgoers. This morning, it takes the next step in the startup lifecycle, as Bay Area-based kids robotics firm Miko announces that it has acquired a 70% majority stake in the firm. "We're thrilled to join forces with Miko on this journey to revolutionize edutainment for kids," Square Off's co-founder and CEO, Bhavya Gohil, says in a short press release tied to the news. Miko, meanwhile, is a Disney Accelerator grad best known for its eponymous toy robot.
SQuARE: Software for Question Answering Research
Have you ever wanted to try Question Answering (QA) models but held back because you needed to write code to set them up? Have you ever wanted to compare QA models, but found a Jupyter Notebook too inconvenient for the comparison? Have you ever wanted to use explainability methods such as saliency maps to explain model outputs, but didn't even know where to start? We have been there too! That's why we built SQuARE: Software for Question Answering Research!
Artificial intelligence is powering chess and India is taking note
Imagine you are playing a chess game online and you've just made your move. Then you are told how Magnus Carlsen would probably respond. You now have to think and make your countermove. This is what IT services and consulting company Tech Mahindra, the digital partner of the 44th FIDE Chess Olympiad in Chennai, is working on--creating an artificial intelligence (AI)-powered "immersive" digital platform for fans, chess lovers, and aficionados. Through this, fans could feel they are part of a game played by the world's top chess players and try to match their moves.
- Asia > India > Tamil Nadu > Chennai (0.28)
- North America > United States (0.06)
- Europe > Russia (0.05)
- (2 more...)
K-means Clustering and Principal Component Analysis in 10 Minutes
There are two major kinds of machine learning models: supervised and unsupervised. In supervised learning, you have input data X and output data y, and the model learns a map from X to y. In unsupervised learning, you only have input data X. The goal of unsupervised learning varies: clustering the observations in X, reducing the dimensionality of X, detecting anomalies in X, and so on. Since supervised learning was discussed extensively in Part 1 and Part 2 of this series, this story focuses on unsupervised learning.
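As a minimal sketch of the two unsupervised tasks mentioned above (clustering and dimensionality reduction), the snippet below runs a plain NumPy implementation of k-means (Lloyd's algorithm with farthest-point initialization) and PCA via SVD on two synthetic blobs; the toy data and all names are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: two well-separated Gaussian blobs in 2-D
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(5.0, 0.5, (50, 2))])

def kmeans(X, k, iters=20):
    """Lloyd's algorithm with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all chosen centers
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each point to its nearest center, then recompute means
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)

# PCA via SVD of the centered data: rows of Vt are the principal axes
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_1d = Xc @ Vt[0]                       # project onto the first component
explained = S[0] ** 2 / (S ** 2).sum()  # fraction of variance explained
```

Because the two blobs are separated along the diagonal, the first principal component captures nearly all of the variance, and k-means recovers the two groups exactly.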
Einstein, Empathy and AI
Albert Einstein once said: "The ideals that have lighted my way, and time after time have given me new courage to face life cheerfully, have been Kindness, Beauty and Truth." You don't often hear these words in the digital world. How do we integrate these life essentials in technologies like artificial intelligence (AI), machine learning, edge computing, internet of things (IoT) and data at scale? Technology, after all, makes things less personal, right? One company is working hard to disprove this assumption.
Where Do Loss Functions Come From?
We all know that in linear regression we aim to minimise the Sum of Squared Errors (SSE) as our objective. But why the SSE, and where does this expression even come from? In this article I hope to answer that question using something called the Maximum Likelihood Estimator. If you have spent enough time in the Data Science community, I am confident you will have come across the term Maximum Likelihood Estimator (MLE). I am not going to give an in-depth analysis of MLE, primarily because it has been covered many times, in ways that are probably better than I could ever explain it.
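The connection the article hints at can be shown numerically: under a Gaussian noise model, the negative log-likelihood is an increasing affine function of the SSE, so both are minimized by the same parameters. The sketch below (my own toy setup, not the article's code) fits a single slope by grid search:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)    # true slope 2, Gaussian noise

sigma2 = 0.3 ** 2
grid = np.linspace(0, 4, 401)          # candidate slopes

# sum of squared errors for each candidate slope
sse = np.array([((y - a * x) ** 2).sum() for a in grid])
# Gaussian negative log-likelihood: n/2 * log(2*pi*sigma^2) + SSE / (2*sigma^2)
nll = 0.5 * n * np.log(2 * np.pi * sigma2) + sse / (2 * sigma2)

# NLL = constant + SSE / (2*sigma^2), so the two argmins coincide:
# minimizing the SSE *is* maximum likelihood under Gaussian noise.
a_sse = grid[np.argmin(sse)]
a_mle = grid[np.argmin(nll)]
```

Both estimates land on the same grid point, close to the true slope of 2.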
Provably Auditing Ordinary Least Squares in Low Dimensions
Measuring the stability of conclusions derived from Ordinary Least Squares linear regression is critically important, but most metrics either only measure local stability (i.e. against infinitesimal changes in the data), or are only interpretable under statistical assumptions. Recent work proposes a simple, global, finite-sample stability metric: the minimum number of samples that need to be removed so that rerunning the analysis overturns the conclusion, specifically meaning that the sign of a particular coefficient of the estimated regressor changes. However, besides the trivial exponential-time algorithm, the only approach for computing this metric is a greedy heuristic that lacks provable guarantees under reasonable, verifiable assumptions; the heuristic provides a loose upper bound on the stability and also cannot certify lower bounds on it. We show that in the low-dimensional regime where the number of covariates is a constant but the number of samples is large, there are efficient algorithms for provably estimating (a fractional version of) this metric. Applying our algorithms to the Boston Housing dataset, we exhibit regression analyses where we can estimate the stability up to a factor of $3$ better than the greedy heuristic, and analyses where we can certify stability to dropping even a majority of the samples.
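The greedy heuristic the abstract contrasts against can be sketched roughly as follows: repeatedly refit OLS with each remaining sample left out, remove the sample whose deletion pushes the target coefficient furthest toward zero, and stop once the sign flips. This is an illustrative version in that spirit, not the authors' algorithm; the function name and toy data are my own:

```python
import numpy as np

def greedy_removal_count(X, y, j, max_remove=None):
    """Greedy upper bound on how many samples must be removed so that
    refitting OLS flips the sign of coefficient j (None if no flip
    is found within the budget)."""
    idx = np.arange(len(y))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sign = np.sign(beta[j])
    if max_remove is None:
        max_remove = len(y) - X.shape[1] - 1
    for removed in range(1, max_remove + 1):
        best_i, best_val = None, None
        for i in idx:
            keep = idx[idx != i]
            b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0][j]
            if best_val is None or sign * b < best_val:
                best_i, best_val = i, sign * b
        idx = idx[idx != best_i]
        if best_val <= 0:       # sign flipped (or hit zero)
            return removed
    return None

# toy example: one large outlier makes an otherwise negative slope positive,
# so removing a single sample flips the coefficient's sign
x = np.linspace(-1, 1, 20)
y = -0.05 * x
y[-1] += 5.0
n_removed = greedy_removal_count(x.reshape(-1, 1), y, j=0)  # 1
```

As the abstract notes, such a heuristic only yields an upper bound on the true removal count and cannot certify lower bounds, which is the gap the paper's algorithms address in low dimensions.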
Prior Knowledge in AI -- Is it Really "Cheating"?
Why were our brains so quick to perceive the two hues as different, when removing the shadow reveals that the hue intensity values are actually the same? It comes down to understanding how our brains reached that primary conclusion -- the way the inference was made. The hope for AGI -- artificial general intelligence -- at least its hope in the 60s and 70s, and perhaps now too -- is to emulate human intelligence, with the powerful extrapolation, generalization, and inference capabilities our brains have, and to combine that with the extraordinary computational power of computers. So we should first try to understand how our brains might have reached that primary inference. Now, let's think about the variable D as data -- in this case, the "data" is the pixel color-intensity values of both center squares arriving at our eyes and registered there.