If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Is deep learning really going to be able to do everything? Opinions on deep learning's true potential vary. Geoffrey Hinton, who won the Turing Award for pioneering deep learning, is not entirely unbiased on the question, but others, including Hinton's deep learning collaborator Yoshua Bengio, are looking to infuse deep learning with elements of a field that has stayed under the radar: operations research, an analytical approach to problem-solving and decision-making used in the management of organizations. Machine learning and its deep learning variety are practically household names now, and alongside the considerable hype around deep learning there is a growing number of applications using it.
Car manufacturer BMW and quantum computing technology developer Pasqal have entered a new phase of collaboration to analyze the applicability of quantum computing algorithms to the modeling of metal-forming applications. The automotive industry is one of the most demanding industrial environments, and quantum computing could solve some of its key design and manufacturing issues. According to a report by McKinsey, automotive will be one of the primary value pools for quantum computing, with a high impact noticeable by about 2025. The consulting firm also expects a significant economic impact of related technologies for the automotive industry, estimated at $2 billion to $3 billion by 2030. Volkswagen Group led the way with the launch of a dedicated quantum computing research team back in 2016.
I have been working on machine learning since my third year of college. But during that time, the process always involved taking a dataset from Kaggle or some other open-source website. The models and algorithms lived in a Jupyter Notebook or a Python script and were never deployed to a production website; everything ran on localhost. While interning at HackerRank, and later after starting as a Software Engineer here on the HackerRank Labs team working on a new product, I got the chance to deploy three different ML models to production, working on them end-to-end. In this blog, I will share my learnings and experience from one of those deployed models.
Graph Neural Network (GNN) models typically assume a full feature vector for each node. The model takes two inputs: the (normalised) adjacency matrix A encoding the graph structure and the feature matrix X containing the node feature vectors as rows; it outputs the node embeddings Z. Each GCN layer performs a node-wise feature transformation (parametrised by the learnable matrices W₁ and W₂) and then propagates the transformed feature vectors to the neighbouring nodes. Importantly, GCN assumes that all the entries of X are observed. In real-world scenarios, we often see situations where some node features are missing (Fig 1).
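The transform-then-propagate step above can be sketched numerically. This is a minimal single-layer illustration (one weight matrix W standing in for the W₁/W₂ of the full model) using the standard symmetric normalisation of the adjacency matrix; the toy graph, feature values, and function names are assumptions for the example, not part of the original model.

```python
import numpy as np

def normalise_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}: add self-loops, then symmetrically normalise."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, X, W):
    """Node-wise transform (X @ W), propagate to neighbours (A_hat @ ...), then ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy graph: 3 nodes in a path 0-1-2, with 2-dim input features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.random.default_rng(0).normal(size=(2, 4))  # learnable in practice

Z = gcn_layer(normalise_adjacency(A), X, W)
print(Z.shape)  # one 4-dim embedding per node: (3, 4)
```

Note that the propagation step `A_hat @ ...` is exactly where missing entries in X become a problem: each node's embedding mixes in its neighbours' features, so unobserved values contaminate the whole neighbourhood.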
In the world of Artificial Intelligence (AI), there is a raging debate on whether or not we can achieve General Artificial Intelligence (GAI). Some believe that it will happen in the next few years; others, that it will never come about. Those opposed have some fairly solid arguments as to why not. Essentially, GAI is reached when AI becomes "sentient": when it can truly think on its own, formulate ideas, views and opinions, and be an equal to mankind in intellectual terms. Right now, AI, an umbrella term for many different technologies, is known as Narrow AI (NAI).
When planning an AI-assisted content generation UX/UI (user experience and user interface), three aspects need to be decided upon: 1) interaction mode: copilot or automatic; 2) work unit (e.g. an image or a full album, a document clause or a full document, a code function or a micro-service, …); 3) starting point: updating existing content samples or inventing new content from scratch. Let's elaborate on the interaction mode options. In Copilot mode, an AI assistant can, for example, suggest, auto-complete, extend, check, test, and improve the content. This is usually done in iterations, guided by the user, and with small work units. In Automatic mode, an AI assistant can, for example, i) replicate previous human actions or preferences and apply them to new samples, or ii) create or compose new samples with certain representation properties.
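The three decision axes above can be captured as a small configuration type, which makes the design choice explicit in code before any UI is built. This is a hypothetical sketch; all the type and field names are assumptions for illustration, not from the original article.

```python
from dataclasses import dataclass
from enum import Enum

class InteractionMode(Enum):
    COPILOT = "copilot"      # iterative, user-guided, small work units
    AUTOMATIC = "automatic"  # replicate preferences or compose new samples

class StartingPoint(Enum):
    UPDATE_EXISTING = "update_existing"  # start from existing content samples
    FROM_SCRATCH = "from_scratch"        # invent new content

@dataclass(frozen=True)
class GenerationUXConfig:
    mode: InteractionMode
    work_unit: str  # e.g. "image", "album", "document_clause", "code_function"
    starting_point: StartingPoint

# Example: a copilot that iterates on individual code functions from scratch
cfg = GenerationUXConfig(
    mode=InteractionMode.COPILOT,
    work_unit="code_function",
    starting_point=StartingPoint.FROM_SCRATCH,
)
print(cfg.mode.value)  # "copilot"
```

Pinning the three choices down in one immutable object like this also makes it easy to A/B test different combinations (e.g. copilot vs. automatic for the same work unit).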
In this interview, we talk to Takayuki Baba from Fujitsu Research about ongoing research using artificial intelligence to achieve earlier diagnosis of pancreatic cancer. I am Takayuki Baba, and I am researching medical image diagnosis support technology at Fujitsu Research as a "converging technology" that combines image analysis technology and medical science. Converging technologies combine two or more different social science and technology areas to achieve a specific goal, and they represent a major focus of Fujitsu's R&D. Fujitsu Research has a track record in developing technologies that detect multiple types of lesions on computed tomography (CT) images with AI and retrieve past CT images with a similar distribution of lesions; these are used in medical diagnostic imaging support technologies that help physicians make diagnoses. Fujitsu and the Southern Tohoku General Hospital have started joint research on AI technology for detecting pancreatic cancer from non-contrast CT images, together with Fujitsu Japan Limited and FCOM CORPORATION, which has been supporting the hospital's medical system. The survival rate for pancreatic cancer is low, as the disease is often found only after it has progressed to a state that is difficult to treat.
AskSid, a Gupshup company, recently launched a host of artificial intelligence (AI) bots, including the Product Discovery Bot, an AI-enabled plugin that helps retail and CPG brands make the product discovery process seamless and effortless for customers. It allows customer service teams to eliminate human intervention in the product discovery process, reduce discovery time to a matter of seconds, and support customers at every stage of the decision-making process. AskSid's Product Discovery Software and APIs are designed to deliver frictionless shopping experiences. They are powered by AskSid's retail intelligence models, which support accurate product recommendations on customer searches, even for long-tail keywords. AskSid's retail AI models are proficient at identifying product attributes and tags, linking queries with a choice of recommendations that match the search and leading to higher conversions.
Like innovation before it, co-innovation is quickly becoming both corporate buzzword and gospel. By partnering with tech companies to harness the power of emerging technologies like artificial intelligence and big data, co-innovation has been heralded as essential to future success, especially for businesses that exist outside of the digital realm. But before companies rush to embrace this growing trend in corporate America, they would be well served to keep a few key principles in mind when it comes to co-innovation, lest they fall prey to shiny new capabilities that look great on paper but can't be implemented or scaled in reality. As the chief strategy and transformation officer at PepsiCo, I oversee our digitalization strategy, so I recognize the power of new technologies: we're already using machine learning and data analytics to improve existing systems, processes and products.
In recent days, I've added and tested a new improvement to the model that generates the forecasts published by spxbot.com. As you may know, the input to a neural network is usually preprocessed, for many reasons, typically to eliminate extremes in the raw data and to create a more uniform analysis environment. Even if it may seem counterintuitive, many documents available on the web agree that adding noise to the input produces better pattern recognition. In simple terms, this process enhances the ability of the neural network to generalize, to extract meaning from the inputs, or simply to "see" better. But what exactly is noise?
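The idea of noise as an augmentation step can be sketched in a few lines. This is a minimal illustration, assuming the features are already scaled; the function name and the noise level `sigma` are assumptions for the example (in practice `sigma` would be tuned on validation data), not details of the spxbot.com model.

```python
import numpy as np

def add_gaussian_noise(X, sigma=0.01, rng=None):
    """Return a noisy copy of X, leaving the original untouched.

    Applied fresh at each training pass, this jitters the inputs so the
    network cannot memorise exact values and must learn the broader pattern.
    """
    rng = rng or np.random.default_rng()
    return X + rng.normal(0.0, sigma, size=X.shape)

# Toy input batch: 5 samples, 2 features each, already scaled to [0, 1]
X = np.linspace(0.0, 1.0, 10).reshape(5, 2)
X_noisy = add_gaussian_noise(X, sigma=0.05, rng=np.random.default_rng(42))

print(X_noisy.shape)  # same shape as X: (5, 2)
```

Because a new noisy copy is drawn at every epoch, the network effectively never sees the same input twice, which acts as a regulariser much like data augmentation in image models.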