Data Integration


Data Engineer, Mid

#artificialintelligence

Work on cutting-edge projects ranging from genomic research to countering threats. Perform activities that include data architecture, building data and analytic platforms, building out extract, transform, and load (ETL) pipelines and data access services, and ensuring data is discoverable and of good quality. Work with a multi-disciplinary team of analysts, data engineers, data scientists, developers, and data consumers in an agile, fast-paced environment that is pushing the envelope of Big Data implementations. Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. We're an EOE that empowers our people--no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, veteran status, or other protected characteristic--to fearlessly drive change.
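The extract, transform, and load (ETL) pipelines the posting describes can be sketched minimally. Everything here is illustrative, not any specific platform's API: the sample data, field names, and quality rules are assumptions chosen to show the three stages.

```python
import csv
import io

# Extract: read raw records from a CSV source (an in-memory sample here).
RAW = "sample_id,gene,expression\n s1 ,BRCA1,2.5\ns2,tp53,\ns3,BRCA1,3.1\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

# Transform: normalize fields and drop records failing a quality gate,
# the "ensuring data is of good quality" step.
def transform(rows):
    clean = []
    for row in rows:
        if not row["expression"]:          # missing value: reject the record
            continue
        clean.append({
            "sample_id": row["sample_id"].strip(),
            "gene": row["gene"].upper(),   # canonical gene-symbol casing
            "expression": float(row["expression"]),
        })
    return clean

# Load: hand cleaned records to a sink (a list stands in for a database).
def load(rows, sink):
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
```

The record with a missing expression value is dropped in the transform stage, so only two of the three rows reach the sink.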


Optimal integration of visual speed across different spatiotemporal frequency channels

Neural Information Processing Systems

How does the human visual system compute the speed of a coherent motion stimulus that contains motion energy in different spatiotemporal frequency bands? Here we propose that perceived speed is the result of optimal integration of speed information from independent spatiotemporal frequency tuned channels. We formalize this hypothesis with a Bayesian observer model that treats the channel activity as independent cues, which are optimally combined with a prior expectation for slow speeds. We test the model against behavioral data from a 2AFC speed discrimination task with which we measured subjects' perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single gratings. We find that perceived speed of the combined stimuli is independent of the relative phase of the underlying grating components, and that the perceptual biases and discrimination thresholds are always smaller for the combined stimuli, supporting the cue combination hypothesis.
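The optimal integration the abstract describes can be illustrated with the standard Bayesian cue-combination arithmetic: each channel contributes a Gaussian likelihood, cues are combined by inverse-variance (precision) weighting, and a zero-mean prior stands in for the expectation of slow speeds. The numbers are illustrative, not the paper's fitted parameters.

```python
def combine(cues, prior_var):
    """Bayes-optimal combination of independent Gaussian speed cues.

    cues: list of (mean, variance) pairs, one per spatiotemporal channel.
    prior_var: variance of a zero-mean prior favoring slow speeds.
    Returns (posterior_mean, posterior_variance).
    """
    precision = 1.0 / prior_var          # prior contributes precision at mean 0
    weighted = 0.0
    for mean, var in cues:
        precision += 1.0 / var           # precisions add for independent cues
        weighted += mean / var
    return weighted / precision, 1.0 / precision

# One channel alone vs. two channels combined: the combined estimate has
# lower variance (narrower discrimination threshold) and is pulled slightly
# toward zero by the slow-speed prior.
single = combine([(8.0, 4.0)], prior_var=100.0)
combined = combine([(8.0, 4.0), (7.0, 2.0)], prior_var=100.0)
```

The posterior variance for the combined stimulus is strictly smaller than for either single cue, matching the paper's finding that discrimination thresholds shrink for combined gratings.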


Orchestrate Containerized Big Data Integration Jobs with Talend and Apache Airflow

#artificialintelligence

In my last blog I described how to achieve continuous integration, delivery and deployment of Talend Jobs into Docker containers with Maven and Jenkins. This is a good start for reliably building your containerized jobs, but the journey doesn't end there. The next step to go further with containerized jobs is scheduling, orchestrating and monitoring them. While there are plenty of solutions you can take advantage of, I want to introduce an effective way to address this need for containerized Talend jobs in this blog. When it comes to data integration or even big data processing you need to go beyond simple task scheduling.
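Going beyond simple task scheduling means expressing jobs as a dependency graph and executing them in topological order, which is exactly what an Airflow DAG encodes. A library-free sketch of that core idea, with illustrative job names and an injected `run_job` callable that in practice would wrap a `docker run` of a containerized Talend job:

```python
from graphlib import TopologicalSorter  # stdlib DAG ordering (Python 3.9+)

# Each job lists the jobs it must wait for, mirroring how an Airflow DAG
# wires operators together with upstream dependencies.
deps = {
    "extract_orders": [],
    "extract_customers": [],
    "transform_join": ["extract_orders", "extract_customers"],
    "load_warehouse": ["transform_join"],
}

def run_pipeline(deps, run_job):
    """Execute jobs in a dependency-respecting order.

    run_job is injected so the orchestration logic stays testable; a real
    orchestrator would launch the job's container and monitor its exit code.
    """
    order = list(TopologicalSorter(deps).static_order())
    for job in order:
        run_job(job)
    return order

executed = run_pipeline(deps, run_job=lambda job: None)
```

Airflow adds what this sketch omits: retries, backfills, monitoring, and parallel execution of independent branches.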


Data Analyst - IoT BigData Jobs

#artificialintelligence

Essential Skills/Characteristics:
• Highly skilled with SQL and writing queries
• Expert in data management principles and data normalization
• Must be able to recognize patterns in data and separate "noise" from "insights"
• Experience in developing reports and charts to depict a "data story"
• Exceptional Excel and pivot table mastery
• Exposure to data modeling and understanding of predictive analysis
• Ability to propose improvements to existing systems/databases/data structures
• Keen ability to articulate data concepts in layman's terms
• Ability to collaborate with remote teams
• Extensive experience with data analysis
• Experience with ETL tools
Winning Ways
• Focus on the Customer: Know your customers well; add value with a sense of urgency.
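Separating "noise" from "insights" with SQL often reduces to aggregation plus a `HAVING` filter that discards groups with too few samples to trust. A small sketch using the stdlib `sqlite3` module; the table and column names are illustrative assumptions.

```python
import sqlite3

# In-memory database with an illustrative sensor-readings table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("a", 10.0), ("a", 10.2), ("a", 9.8),
     ("b", 3.0), ("b", 250.0),          # 250.0 is a noisy outlier
     ("c", 7.0)],                       # single sample: too little evidence
)

# Aggregate per device and keep only devices with enough samples to
# trust the average -- the "noise" vs. "insights" pattern in query form.
rows = conn.execute(
    """
    SELECT device, ROUND(AVG(value), 1) AS avg_value, COUNT(*) AS n
    FROM readings
    GROUP BY device
    HAVING COUNT(*) >= 2
    ORDER BY device
    """
).fetchall()
```

Device "c" is excluded by the `HAVING` clause, and device "b"'s inflated average makes the outlier visible for a follow-up query.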


How Machine Learning Supports the Integration Task Force

#artificialintelligence

Data integration methods or tools have undergone a major overhaul in the last few years. Not so long ago, traditional manual methods were employed to integrate data. But as the volume of data increased, these methods became outdated due to their labor-intensive, time-consuming, and error-prone nature. Companies now require in-depth business knowledge, a strong understanding of a diverse set of data schemas, and cognizance of underlying data relationships to perform data integration. With time, organizations have shifted their reliance to newer techniques to bolster data integration.


Data Engineer

#artificialintelligence

Support configuration and ingestion of designated structured, unstructured, and semi-structured data repositories into capabilities to satisfy mission partner requirements. Use database design and implementation tools, such as entity-relationship data modeling and SQL, distributed computing architectures, operating systems, storage technologies, memory management and networking. Work with structured, unstructured, and semi-structured data, streaming and batch data processing, ETL, data wrangling, data ingest, and data access. This position includes work that will be completed internationally, including the MENA region. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; TS/SCI clearance is required.
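Entity-relationship data modeling in SQL comes down to tables keyed by primary keys and related through foreign keys that the database enforces. A minimal sketch with `sqlite3`; the schema (sources and ingest records) is an illustrative assumption, not any particular mission system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when asked

# Two entities and a one-to-many relationship, modeled with a foreign key:
# each ingest record belongs to exactly one source repository.
conn.executescript("""
CREATE TABLE source (
    source_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE
);
CREATE TABLE ingest (
    ingest_id INTEGER PRIMARY KEY,
    source_id INTEGER NOT NULL REFERENCES source(source_id),
    payload   TEXT NOT NULL
);
""")
conn.execute("INSERT INTO source (source_id, name) VALUES (1, 'structured')")
conn.execute("INSERT INTO ingest (source_id, payload) VALUES (1, 'row-1')")

# A record pointing at a nonexistent source is rejected, not silently stored:
# referential integrity is part of what ER modeling buys you.
try:
    conn.execute("INSERT INTO ingest (source_id, payload) VALUES (99, 'bad')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The same schema pattern scales to the structured/semi-structured split the posting mentions: semi-structured payloads can live in a text or JSON column while the relationships stay relational.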


Senior Big Data Engineer - IoT BigData Jobs

#artificialintelligence

ByteCubed is currently seeking a Senior Big Data Engineer to join our rapidly growing technology company that believes small empowered teams of talented individuals can make impactful change. The ideal candidate will be working in a dynamic team environment building reusable SaaS components to ingest various types of data into a cloud environment using ETL tools. ByteCubed is an Affirmative Action/Equal Opportunity Employer committed to providing equal employment opportunity without regard to an individual's race, color, religion, age, gender, sexual orientation, veteran status, national origin or disability.


Darts-ip: Data for all

#artificialintelligence

Success in any area is often a combination of three things: talent, hard work and perseverance. For software-as-a-service (SaaS) company Darts-ip, all three were needed to grow a pioneering idea from a handful of people to a 300-strong organisation in just 13 years. The talent came in the form of two groups from very different industries. The service they wanted to offer, to make legal research as easy as possible, came from trademark lawyer and Darts-ip founder Jean-Jo Evrard. While working in Brussels and Paris for law firm NautaDutilh, Evrard was frustrated.


Does Your Business Have A Silo Mentality Problem?

#artificialintelligence

Data integration software and ETL tools provided by the CloverDX platform (formerly known as CloverETL) offer solutions for data management tasks such as data integration, data migration, or data quality. CloverDX is a vital part of enterprise solutions such as data warehousing, business intelligence (BI) or master data management (MDM). CloverDX Designer (formerly known as CloverETL Designer) is a visual data transformation designer that helps define data flows and transformations in a quick, visual, and intuitive way. CloverDX Server (formerly known as CloverETL Server) is an enterprise ETL and data integration runtime environment. It offers a set of enterprise features such as automation, monitoring, user management, real-time ETL, data API services, clustering, or cloud data integration.


ETL By Any Other Name Is Still A Challenge, And Machine Learning Can Identify And Manage The Metadata

#artificialintelligence

Extraction, transformation and load (ETL) became a familiar concept in the 1990s, when data warehousing became a well-known business intelligence (BI) concept. The advent of the web and the vast volumes of data it produced took many organizations' focus away from ETL to data lakes. Too many people disparaged ETL as a tool of the past. However, as IT has always been aware, data lakes aren't a solution all to themselves, and rebranding to ELT doesn't change the fact that there are now far more sources and targets than there ever were. Data movement is still a complex problem, as is metadata management, and both are becoming even more challenging as regulatory requirements for privacy mean data must be better tracked and controlled.
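"Identifying the metadata" can be made concrete with a simple rule-based column-type inference pass over sample values. This is a deliberately naive stand-in for the learned classifiers the headline alludes to: real systems train models on labeled columns, but the task is the same, mapping raw values to metadata such as type and nullability. All names here are illustrative.

```python
def infer_column_type(values):
    """Infer a coarse type label and nullability from sample column values.

    Returns (type_label, nullable). A rule-based sketch of the metadata
    identification task; an ML system would learn these rules from data.
    """
    non_null = [v for v in values if v not in ("", None)]
    nullable = len(non_null) < len(values)

    def all_match(cast):
        try:
            for v in non_null:
                cast(v)
            return True
        except ValueError:
            return False

    if all_match(int):
        label = "integer"
    elif all_match(float):
        label = "float"
    else:
        label = "text"
    return label, nullable

# Profile a few sample columns, as an ETL tool might during discovery.
schema = {
    "order_id": infer_column_type(["1", "2", "3"]),
    "price":    infer_column_type(["9.99", "12.50", ""]),
    "city":     infer_column_type(["Berlin", "Paris"]),
}
```

Even this crude pass yields the metadata needed to track a column across sources and targets, which is where the regulatory tracking requirements bite.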