When you think of artificial intelligence (AI), do you imagine Will Smith battling humanoid robots? Think again: AI is already at work across the internet, helping you go about your daily life without drawing attention to itself. Artificial intelligence simulates traditionally human processes like learning, reasoning and self-correction. Unlike traditional programs, AI-based applications don't need to be manually recoded every time their behavior must change -- they adjust their functionality and output as they learn from data. AI can be (and already is) immensely useful to B2B professionals in all industries.
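The contrast between a manually coded program and one that learns can be sketched in a few lines. The spam filter below is a hypothetical, deliberately minimal example (the keyword-counting "model" is an assumption for illustration, not any particular product's algorithm): the rule-based version needs a developer to edit it for every new pattern, while the learned version adjusts its own behavior from labeled examples.

```python
# A hand-coded rule must be edited whenever requirements change.
def is_spam_rule_based(subject: str) -> bool:
    # Every new spam pattern requires a developer to add a keyword here.
    return "free money" in subject.lower()

# A learned model adjusts itself from labeled examples instead.
def train_keyword_weights(examples):
    """Count how often each word appears in spam vs. non-spam subjects."""
    weights = {}
    for subject, is_spam in examples:
        for word in subject.lower().split():
            weights[word] = weights.get(word, 0) + (1 if is_spam else -1)
    return weights

def is_spam_learned(subject: str, weights) -> bool:
    # Sum the learned weight of each word; positive total means "spam".
    score = sum(weights.get(w, 0) for w in subject.lower().split())
    return score > 0

examples = [
    ("free money now", True),
    ("win free prizes", True),
    ("meeting agenda attached", False),
    ("quarterly report draft", False),
]
weights = train_keyword_weights(examples)
print(is_spam_learned("claim your free money", weights))   # True
print(is_spam_learned("agenda for the meeting", weights))  # False
```

Feeding the trainer a new labeled example changes the classifier's behavior with no code change at all -- which is the property the paragraph above describes.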
Companies today are awash in data, but current tools and processes are not enabling them to keep it secure. That's according to Informatica CEO Anil Chakravarthy, who says his company -- which has traditionally focused on data management and integration -- is embarking on a major push to go further into data security. "You hear about breaches all the time -- just imagine all the ones you're not hearing about," Chakravarthy said in a recent interview. "Data security today is an unsolved problem for customers." Last year, Informatica launched a product called Secure@Source that promises a data-centric approach to information security by helping organizations identify and visualize sensitive data wherever it resides.
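To make "identify sensitive data wherever it resides" concrete, here is a minimal sketch of pattern-based discovery. This is a hypothetical illustration of the general technique, not Secure@Source's implementation; the patterns and the `scan_for_sensitive_data` helper are assumptions, and a real product would use far more robust detection than simple regular expressions.

```python
import re

# Illustrative patterns only -- real classifiers go well beyond regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_for_sensitive_data(records):
    """Return (record_index, data_type) hits found across a dataset."""
    hits = []
    for i, text in enumerate(records):
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, label))
    return hits

records = [
    "Customer note: call back after 5pm",
    "SSN on file: 123-45-6789",
    "Contact: jane.doe@example.com",
]
print(scan_for_sensitive_data(records))
# [(1, 'ssn'), (2, 'email')]
```

Running a scan like this across every data store an organization owns, then mapping the hits, is the essence of the data-centric approach described above.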
Government-funded artificial intelligence programs could soon be organized under a new effort by the General Services Administration. GSA earlier this month created the Data Federation, a site that intends to coordinate the disparate existing data-related efforts at various agencies by sharing standards, case studies and reusable tech tools. On Monday, GSA plans to announce a new community of practice -- a subsection dedicated to artificial intelligence -- according to Technology Transformation Service data portfolio lead Philip Ashlock, who was speaking at a Digital Government Institute conference on Thursday. The Data Federation is still in the very early stages, he said; long term, it is working with 18F and the Presidential Innovation Fellows program to develop a "maturity model" to understand how data projects tend to evolve.
There's still a long way to go before complex human traits like humor can be properly emulated by artificial intelligence, but Alphabet Inc. is already starting to inject wit into the research effort. The company last week published a machine learning model called "Parsey McParseface" that can automatically map out the linguistic structure of any English-language text. The algorithm, which is hailed as the most accurate of its kind yet, was created using a neural network system that became available on GitHub at the same time. Alphabet hopes that its contribution will ease the development of virtual assistants and other modern applications that deal with a lot of human-generated information. Equally important for the search giant, the move will also cement its position in the open-source machine learning community, which has emerged as a key focus area for the web-scale crowd.
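"Mapping out linguistic structure" means producing a dependency parse: each word is attached to a head word by a grammatical relation. The hand-constructed example below illustrates what that output looks like for a short sentence; the tuple format and the `children_of` helper are assumptions for illustration, not Parsey McParseface's actual output schema.

```python
# A hand-constructed dependency parse for "Alphabet published a model",
# stored as (word, head_index, relation) tuples; head index 0 is the root,
# and other heads refer to 1-based token positions.
parse = [
    ("Alphabet", 2, "nsubj"),   # subject of "published"
    ("published", 0, "root"),   # main verb, attached to the root
    ("a", 4, "det"),            # determiner of "model"
    ("model", 2, "dobj"),       # direct object of "published"
]

def children_of(parse, head_index):
    """List the words whose head is the token at head_index (1-based)."""
    return [word for word, head, _ in parse if head == head_index]

print(children_of(parse, 2))  # words attached directly to "published"
# ['Alphabet', 'model']
```

A parser that can recover who did what to whom this way is exactly what lets a virtual assistant turn free-form text into something it can act on.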
Industry experts estimate that data volumes are doubling in size every two years. Managing all of this is a challenge for any enterprise, but it's not just the volume of data so much as the variety of data that presents a problem. With SaaS and on-premises applications, machine data, and mobile apps all proliferating, we are seeing the rise of an increasingly complicated value-chain ecosystem. IT leaders need to adopt a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is necessary to provide the right data to the right people at the right time.
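Doubling every two years compounds faster than intuition suggests. A quick projection makes the scale of the challenge concrete (the starting figure of 100 TB and the `projected_volume` helper are illustrative assumptions):

```python
# Project data volume, assuming it doubles every `doubling_period` years.
def projected_volume(initial_tb: float, years: int, doubling_period: int = 2) -> float:
    """Volume in TB after `years` of compounding growth."""
    return initial_tb * 2 ** (years / doubling_period)

for years in (2, 4, 10):
    print(f"after {years:2d} years: {projected_volume(100, years):.0f} TB")
# after  2 years: 200 TB
# after  4 years: 400 TB
# after 10 years: 3200 TB
```

A 32-fold increase inside a decade is why capacity planning alone doesn't solve the problem -- the variety of sources grows alongside the volume.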